synapse-0.24.0/.github/ISSUE_TEMPLATE.md

### Description

Describe here the problem that you are experiencing, or the feature you are requesting.

### Steps to reproduce

- For bugs, list the steps
- that reproduce the bug
- using hyphens as bullet points

Describe how what happens differs from what you expected.

If you can identify any relevant log snippets from _homeserver.log_, please include
those here (please be careful to remove any personal or private data):

### Version information

- **Homeserver**: Was this issue identified on matrix.org or another homeserver?

If not matrix.org:

- **Version**: What version of Synapse is running?
- **Install method**: package manager/git clone/pip
- **Platform**: Tell us about the environment in which your homeserver is operating - distro, hardware, if it's running in a vm/container, etc.

synapse-0.24.0/.gitignore

*.pyc
.*.swp
.DS_Store
_trial_temp/
logs/
dbs/
*.egg
dist/
docs/build/
*.egg-info
cmdclient_config.json
homeserver*.db
homeserver*.log
homeserver*.pid
homeserver*.yaml
*.signing.key
*.tls.crt
*.tls.dh
*.tls.key
.coverage
htmlcov
demo/*/*.db
demo/*/*.log
demo/*/*.log.*
demo/*/*.pid
demo/media_store.*
demo/etc
uploads
.idea/
media_store/
*.tac
build/
localhost-800*/
static/client/register/register_config.js
.tox
env/
*.config

synapse-0.24.0/.travis.yml

sudo: false
language: python
python: 2.7

# tell travis to cache ~/.cache/pip
cache: pip

env:
  - TOX_ENV=packaging
  - TOX_ENV=pep8
  - TOX_ENV=py27

install:
  - pip install tox

script:
  - tox -e $TOX_ENV

synapse-0.24.0/AUTHORS.rst

Erik Johnston
 * HS core
 * Federation API impl

Mark Haines
 * HS core
 * Crypto
 * Content repository
 * CS v2 API impl

Kegan Dougal
 * HS core
 * CS v1 API impl
 * AS API impl

Paul "LeoNerd" Evans
 * HS core
 * Presence
 * Typing Notifications
 * Performance metrics and caching layer

Dave Baker
 * Push notifications
 * Auth CS v2 impl

Matthew Hodgson
 * General doc & housekeeping
 * Vertobot/vertobridge matrix<->verto PoC

Emmanuel Rohee
 * Supporting iOS clients (testability and fallback registration)

Turned to Dust
 * ArchLinux installation instructions

Brabo
 * Installation instruction fixes

Ivan Shapovalov
 * contrib/systemd: a sample systemd unit file and a logger configuration

Eric Myhre
 * Fix bug where ``media_store_path`` config option was ignored by v0 content repository API.

Muthu Subramanian
 * Add SAML2 support for registration and login.

Steven Hammerton
 * Add CAS support for registration and login.

Mads Robin Christensen
 * CentOS 7 installation instructions.

Florent Violleau
 * Add Raspberry Pi installation instructions and general troubleshooting items

Niklas Riekenbrauck
 * Add JWT support for registration and login

Christoph Witzany
 * Add LDAP support for authentication

synapse-0.24.0/CHANGES.rst

Changes in synapse v0.24.0 (2017-10-23)
=======================================

No changes since v0.24.0-rc1

Changes in synapse v0.24.0-rc1 (2017-10-19)
===========================================

Features:

* Add Group Server (PR #2352, #2363, #2374, #2377, #2378, #2382, #2410, #2426, #2430, #2454, #2471, #2472, #2544)
* Add support for channel notifications (PR #2501)
* Add basic implementation of backup media store (PR #2538)
* Add config option to auto-join new users to rooms (PR #2545)

Changes:

* Make the spam checker a module (PR #2474)
* Delete expired url cache data (PR #2478)
* Ignore incoming events for rooms that we have left (PR #2490)
* Allow spam checker to reject invites too (PR #2492)
* Add room creation checks to spam checker (PR #2495)
* Spam checking: add the invitee to user_may_invite (PR #2502)
* Process events from federation for different rooms in parallel (PR #2520)
* Allow error strings from spam checker (PR #2531)
* Improve error handling for missing files in config (PR #2551)

Bug fixes:

* Fix handling SERVFAILs when doing AAAA lookups for federation (PR #2477)
* Fix incompatibility with newer versions of ujson (PR #2483) Thanks to @jeremycline!
* Fix notification keywords that start/end with non-word chars (PR #2500)
* Fix stack overflow and logcontexts from linearizer (PR #2532)
* Fix 500 error when fields missing from power_levels event (PR #2552)
* Fix 500 error when we get an error handling a PDU (PR #2553)

Changes in synapse v0.23.1 (2017-10-02)
=======================================

Changes:

* Make 'affinity' package optional, as it is not supported on some platforms

Changes in synapse v0.23.0 (2017-10-02)
=======================================

No changes since v0.23.0-rc2

Changes in synapse v0.23.0-rc2 (2017-09-26)
===========================================

Bug fixes:

* Fix regression in performance of syncs (PR #2470)

Changes in synapse v0.23.0-rc1 (2017-09-25)
===========================================

Features:

* Add a frontend proxy worker (PR #2344)
* Add support for event_id_only push format (PR #2450)
* Add a PoC for filtering spammy events (PR #2456)
* Add a config option to block all room invites (PR #2457)

Changes:

* Use bcrypt module instead of py-bcrypt (PR #2288) Thanks to @kyrias! (See the illustrative snippet after this list.)
* Improve performance of generating push notifications (PR #2343, #2357, #2365, #2366, #2371)
* Improve DB performance for device list handling in sync (PR #2362)
* Include a sample prometheus config (PR #2416)
* Document known to work postgres version (PR #2433) Thanks to @ptman!
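
For illustration only, the bullet above about switching to the ``bcrypt`` module can be read alongside the usual pattern for hashing and verifying a password with that package. This is a minimal sketch of typical ``bcrypt`` usage, not Synapse's actual implementation::

    import bcrypt

    # Hash a UTF-8 encoded password together with a freshly generated salt.
    hashed = bcrypt.hashpw("s3cret".encode("utf-8"), bcrypt.gensalt())

    # Later, check a login attempt against the stored hash.
    if bcrypt.checkpw("s3cret".encode("utf-8"), hashed):
        print("password accepted")
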
Bug fixes:

* Fix caching error in the push evaluator (PR #2332)
* Fix bug where pusherpool didn't start and broke some rooms (PR #2342)
* Fix port script for user directory tables (PR #2375)
* Fix device lists notifications when user rejoins a room (PR #2443, #2449)
* Fix sync to always send down current state events in timeline (PR #2451)
* Fix bug where guest users were incorrectly kicked (PR #2453)
* Fix bug talking to IPv6 only servers using SRV records (PR #2462)

Changes in synapse v0.22.1 (2017-07-06)
=======================================

Bug fixes:

* Fix bug where pusher pool didn't start and caused issues when interacting with some rooms (PR #2342)

Changes in synapse v0.22.0 (2017-07-06)
=======================================

No changes since v0.22.0-rc2

Changes in synapse v0.22.0-rc2 (2017-07-04)
===========================================

Changes:

* Improve performance of storing user IPs (PR #2307, #2308)
* Slightly improve performance of verifying access tokens (PR #2320)
* Slightly improve performance of event persistence (PR #2321)
* Increase default cache factor size from 0.1 to 0.5 (PR #2330)

Bug fixes:

* Fix bug with storing registration sessions that caused frequent CPU churn (PR #2319)

Changes in synapse v0.22.0-rc1 (2017-06-26)
===========================================

Features:

* Add a user directory API (PR #2252, and many more)
* Add shutdown room API to remove room from local server (PR #2291)
* Add API to quarantine media (PR #2292)
* Add new config option to not send event contents to push servers (PR #2301) Thanks to @cjdelisle!

Changes:

* Various performance fixes (PR #2177, #2233, #2230, #2238, #2248, #2256, #2274)
* Deduplicate sync filters (PR #2219) Thanks to @krombel!
* Correct a typo in UPGRADE.rst (PR #2231) Thanks to @aaronraimist!
* Add count of one time keys to sync stream (PR #2237)
* Only store event_auth for state events (PR #2247)
* Store URL cache preview downloads separately (PR #2299)

Bug fixes:

* Fix users not getting notifications when AS listened to that user_id (PR #2216) Thanks to @slipeer!
* Fix users without push set up not getting notifications after joining rooms (PR #2236)
* Fix preview url API to trim long descriptions (PR #2243)
* Fix bug where we used cached but unpersisted state group as prev group, resulting in broken state on restart (PR #2263)
* Fix removing of pushers when using workers (PR #2267)
* Fix CORS headers to allow Authorization header (PR #2285) Thanks to @krombel!

Changes in synapse v0.21.1 (2017-06-15)
=======================================

Bug fixes:

* Fix bug in anonymous usage statistic reporting (PR #2281)

Changes in synapse v0.21.0 (2017-05-18)
=======================================

No changes since v0.21.0-rc3

Changes in synapse v0.21.0-rc3 (2017-05-17)
===========================================

Features:

* Add per user rate-limiting overrides (PR #2208)
* Add config option to limit maximum number of events requested by ``/sync`` and ``/messages`` (PR #2221) Thanks to @psaavedra!

Changes:

* Various small performance fixes (PR #2201, #2202, #2224, #2226, #2227, #2228, #2229)
* Update username availability checker API (PR #2209, #2213)
* When purging, don't de-delta state groups we're about to delete (PR #2214)
* Documentation to check synapse version (PR #2215) Thanks to @hamber-dick!
* Add an index to event_search to speed up purge history API (PR #2218)

Bug fixes:

* Fix API to allow clients to upload one-time-keys with new sigs (PR #2206)

Changes in synapse v0.21.0-rc2 (2017-05-08)
===========================================

Changes:

* Always mark remotes as up if we receive a signed request from them (PR #2190)

Bug fixes:

* Fix bug where users got pushed for rooms they had muted (PR #2200)

Changes in synapse v0.21.0-rc1 (2017-05-08)
===========================================

Features:

* Add username availability checker API (PR #2183)
* Add read marker API (PR #2120)

Changes:

* Enable guest access for the 3pl/3pid APIs (PR #1986)
* Add setting to support TURN for guests (PR #2011)
* Various performance improvements (PR #2075, #2076, #2080, #2083, #2108, #2158, #2176, #2185)
* Make synctl a bit more user friendly (PR #2078, #2127) Thanks @APwhitehat!
* Replace HTTP replication with TCP replication (PR #2082, #2097, #2098, #2099, #2103, #2014, #2016, #2115, #2116, #2117)
* Support authenticated SMTP (PR #2102) Thanks @DanielDent!
* Add a counter metric for successfully-sent transactions (PR #2121)
* Propagate errors sensibly from proxied IS requests (PR #2147)
* Add more granular event send metrics (PR #2178)

Bug fixes:

* Fix nuke-room script to work with current schema (PR #1927) Thanks @zuckschwerdt!
* Fix db port script to not assume postgres tables are in the public schema (PR #2024) Thanks @jerrykan!
* Fix getting latest device IP for user with no devices (PR #2118)
* Fix rejection of invites to unreachable servers (PR #2145)
* Fix code for reporting old verify keys in synapse (PR #2156)
* Fix invite state to always include all events (PR #2163)
* Fix bug where synapse would always fetch state for any missing event (PR #2170)
* Fix a leak with timed out HTTP connections (PR #2180)
* Fix bug where we didn't time out HTTP requests to ASes (PR #2192)

Docs:

* Clarify doc for SQLite to PostgreSQL port (PR #1961) Thanks @benhylau!
* Fix typo in synctl help (PR #2107) Thanks @HarHarLinks!
* ``web_client_location`` documentation fix (PR #2131) Thanks @matthewjwolff!
* Update README.rst with FreeBSD changes (PR #2132) Thanks @feld!
* Clarify setting up metrics (PR #2149) Thanks @encks!

Changes in synapse v0.20.0 (2017-04-11)
=======================================

Bug fixes:

* Fix joining rooms over federation where not all servers in the room saw the new server had joined (PR #2094)

Changes in synapse v0.20.0-rc1 (2017-03-30)
===========================================

Features:

* Add delete_devices API (PR #1993)
* Add phone number registration/login support (PR #1994, #2055)

Changes:

* Use JSONSchema for validation of filters. Thanks @pik! (PR #1783)
* Reread log config on SIGHUP (PR #1982)
* Speed up public room list (PR #1989)
* Add helpful texts to logger config options (PR #1990)
* Minor ``/sync`` performance improvements.
  (PR #2002, #2013, #2022)
* Add some debug to help diagnose weird federation issue (PR #2035)
* Correctly limit retries for all federation requests (PR #2050, #2061)
* Don't lock table when persisting new one time keys (PR #2053)
* Reduce some CPU work on DB threads (PR #2054)
* Cache hosts in room (PR #2060)
* Batch sending of device list pokes (PR #2063)
* Speed up persist event path in certain edge cases (PR #2070)

Bug fixes:

* Fix bug where current_state_events renamed to current_state_ids (PR #1849)
* Fix routing loop when fetching remote media (PR #1992)
* Fix current_state_events table to not lie (PR #1996)
* Fix CAS login to handle PartialDownloadError (PR #1997)
* Fix assertion to stop transaction queue getting wedged (PR #2010)
* Fix presence to fall back to last_active_ts if it beats the last sync time. Thanks @Half-Shot! (PR #2014)
* Fix bug when federation received a PDU while a room join is in progress (PR #2016)
* Fix resetting state on rejected events (PR #2025)
* Fix installation issues in readme. Thanks @ricco386 (PR #2037)
* Fix caching of remote servers' signature keys (PR #2042)
* Fix some leaking log context (PR #2048, #2049, #2057, #2058)
* Fix rejection of invites not reaching sync (PR #2056)

Changes in synapse v0.19.3 (2017-03-20)
=======================================

No changes since v0.19.3-rc2

Changes in synapse v0.19.3-rc2 (2017-03-13)
===========================================

Bug fixes:

* Fix bug in handling of incoming device list updates over federation.

Changes in synapse v0.19.3-rc1 (2017-03-08)
===========================================

Features:

* Add some administration functionalities. Thanks to morteza-araby! (PR #1784)

Changes:

* Reduce database table sizes (PR #1873, #1916, #1923, #1963)
* Update contrib/ to not use syutil. Thanks to andrewshadura! (PR #1907)
* Don't fetch current state when sending an event in common case (PR #1955)

Bug fixes:

* Fix synapse_port_db failure. Thanks to Pneumaticat! (PR #1904)
* Fix caching to not cache error responses (PR #1913)
* Fix APIs to make kick & ban reasons work (PR #1917)
* Fix bugs in the /keys/changes api (PR #1921)
* Fix bug where users couldn't forget rooms they were banned from (PR #1922)
* Fix issue with long language values in pushers API (PR #1925)
* Fix a race in transaction queue (PR #1930)
* Fix dynamic thumbnailing to preserve aspect ratio. Thanks to jkolo! (PR #1945)
* Fix device list update to not constantly resync (PR #1964)
* Fix potential for huge memory usage when getting devices that have changed (PR #1969)

Changes in synapse v0.19.2 (2017-02-20)
=======================================

* Fix bug with event visibility check in /context/ API. Thanks to Tokodomo for pointing it out! (PR #1929)

Changes in synapse v0.19.1 (2017-02-09)
=======================================

* Fix bug where state was incorrectly reset in a room when synapse received an event over federation that did not pass auth checks (PR #1892)

Changes in synapse v0.19.0 (2017-02-04)
=======================================

No changes since RC 4.

Changes in synapse v0.19.0-rc4 (2017-02-02)
===========================================

* Bump cache sizes for common membership queries (PR #1879)

Changes in synapse v0.19.0-rc3 (2017-02-02)
===========================================

* Fix email push in pusher worker (PR #1875)
* Make presence.get_new_events a bit faster (PR #1876)
* Make /keys/changes a bit more performant (PR #1877)

Changes in synapse v0.19.0-rc2 (2017-02-02)
===========================================

* Include newly joined users in /keys/changes API (PR #1872)

Changes in synapse v0.19.0-rc1 (2017-02-02)
===========================================

Features:

* Add support for specifying multiple bind addresses (PR #1709, #1712, #1795, #1835). Thanks to @kyrias!
* Add /account/3pid/delete endpoint (PR #1714)
* Add config option to configure the Riot URL used in notification emails (PR #1811). Thanks to @aperezdc!
* Add username and password config options for turn server (PR #1832). Thanks to @xsteadfastx!
* Implement device lists updates over federation (PR #1857, #1861, #1864)
* Implement /keys/changes (PR #1869, #1872)

Changes:

* Improve IPv6 support (PR #1696). Thanks to @kyrias and @glyph!
* Log which files we saved attachments to in the media_repository (PR #1791)
* Linearize updates to membership via PUT /state/ to better handle multiple joins (PR #1787)
* Limit number of entries to prefill from cache on startup (PR #1792)
* Remove full_twisted_stacktraces option (PR #1802)
* Measure size of some caches by sum of the size of cached values (PR #1815)
* Measure metrics of string_cache (PR #1821)
* Reduce logging verbosity (PR #1822, #1823, #1824)
* Don't clobber a displayname or avatar_url if provided by an m.room.member event (PR #1852)
* Better handle 401/404 response for federation /send/ (PR #1866, #1871)

Fixes:

* Fix ability to change password to a non-ascii one (PR #1711)
* Fix push getting stuck due to looking at the wrong view of state (PR #1820)
* Fix email address comparison to be case insensitive (PR #1827)
* Fix occasional inconsistencies of room membership (PR #1836, #1840)

Performance:

* Don't block messages sending on bumping presence (PR #1789)
* Change device_inbox stream index to include user (PR #1793)
* Optimise state resolution (PR #1818)
* Use DB cache of joined users for presence (PR #1862)
* Add an index to make membership queries faster (PR #1867)

Changes in synapse v0.18.7 (2017-01-09)
=======================================

No changes from v0.18.7-rc2

Changes in synapse v0.18.7-rc2 (2017-01-07)
===========================================

Bug fixes:

* Fix error in rc1's discarding invalid inbound traffic logic that was incorrectly discarding missing events

Changes in synapse v0.18.7-rc1 (2017-01-06)
===========================================

Bug fixes:

* Fix error in PR #1764 to actually fix the nightmare #1753 bug.
* Improve deadlock logging further
* Discard inbound federation traffic from invalid domains, to immunise against #1753

Changes in synapse v0.18.6 (2017-01-06)
=======================================

Bug fixes:

* Fix bug when checking if a guest user is allowed to join a room (PR #1772) Thanks to Patrik Oldsberg for diagnosing and the fix!

Changes in synapse v0.18.6-rc3 (2017-01-05)
===========================================

Bug fixes:

* Fix bug where we failed to send ban events to the banned server (PR #1758)
* Fix bug where we sent event that didn't originate on this server to other servers (PR #1764)
* Fix bug where processing an event from a remote server took a long time because we were making long HTTP requests (PR #1765, PR #1744)

Changes:

* Improve logging for debugging deadlocks (PR #1766, PR #1767)

Changes in synapse v0.18.6-rc2 (2016-12-30)
===========================================

Bug fixes:

* Fix memory leak in twisted by initialising logging correctly (PR #1731)
* Fix bug where fetching missing events took an unacceptable amount of time in large rooms (PR #1734)

Changes in synapse v0.18.6-rc1 (2016-12-29)
===========================================

Bug fixes:

* Make sure that outbound connections are closed (PR #1725)

Changes in synapse v0.18.5 (2016-12-16)
=======================================

Bug fixes:

* Fix federation /backfill returning events it shouldn't (PR #1700)
* Fix crash in url preview (PR #1701)

Changes in synapse v0.18.5-rc3 (2016-12-13)
===========================================

Features:

* Add support for E2E for guests (PR #1653)
* Add new API appservice specific public room list (PR #1676)
* Add new room membership APIs (PR #1680)

Changes:

* Enable guest access for private rooms by default (PR #653)
* Limit the number of events that can be created on a given room concurrently (PR #1620)
* Log the args that we have on UI auth completion (PR #1649)
* Stop generating refresh_tokens (PR #1654)
* Stop putting a time caveat on access tokens (PR #1656)
* Remove unspecced GET endpoints for e2e keys (PR #1694)

Bug fixes:

* Fix handling of 500 and 429's over federation (PR #1650)
* Fix Content-Type header parsing (PR #1660)
* Fix error when previewing sites that include unicode, thanks to kyrias (PR #1664)
* Fix some cases where we drop read receipts (PR #1678)
* Fix bug where calls to ``/sync`` didn't correctly timeout (PR #1683)
* Fix bug where E2E key query would fail if a single remote host failed (PR #1686)

Changes in synapse v0.18.5-rc2 (2016-11-24)
===========================================

Bug fixes:

* Don't send old events over federation, fixes bug in -rc1.

Changes in synapse v0.18.5-rc1 (2016-11-24)
===========================================

Features:

* Implement "event_fields" in filters (PR #1638)

Changes:

* Use external ldap auth package (PR #1628)
* Split out federation transaction sending to a worker (PR #1635)
* Fail with a coherent error message if `/sync?filter=` is invalid (PR #1636)
* More efficient notif count queries (PR #1644)

Changes in synapse v0.18.4 (2016-11-22)
=======================================

Bug fixes:

* Add workaround for buggy clients that fail to register (PR #1632)

Changes in synapse v0.18.4-rc1 (2016-11-14)
===========================================

Changes:

* Various database efficiency improvements (PR #1188, #1192)
* Update default config to blacklist more internal IPs, thanks to Euan Kemp (PR #1198)
* Allow specifying duration in minutes in config, thanks to Daniel Dent (PR #1625)

Bug fixes:

* Fix media repo to set CORS headers on responses (PR #1190)
* Fix registration to not error on non-ascii passwords (PR #1191)
* Fix create event code to limit the number of prev_events (PR #1615)
* Fix bug in transaction ID deduplication (PR #1624)

Changes in synapse v0.18.3 (2016-11-08)
=======================================

SECURITY UPDATE

Explicitly require authentication when using LDAP3. This is the default on
versions of ``ldap3`` above 1.0, but some distributions will package an older
version. If you are using LDAP3 login and have a version of ``ldap3`` older
than 1.0 it is **CRITICAL to upgrade**.

Changes in synapse v0.18.2 (2016-11-01)
=======================================

No changes since v0.18.2-rc5

Changes in synapse v0.18.2-rc5 (2016-10-28)
===========================================

Bug fixes:

* Fix prometheus process metrics in worker processes (PR #1184)

Changes in synapse v0.18.2-rc4 (2016-10-27)
===========================================

Bug fixes:

* Fix ``user_threepids`` schema delta, which in some instances prevented startup after upgrade (PR #1183)

Changes in synapse v0.18.2-rc3 (2016-10-27)
===========================================

Changes:

* Allow clients to supply access tokens as headers (PR #1098)
* Clarify error codes for GET /filter/, thanks to Alexander Maznev (PR #1164)
* Make password reset email field case insensitive (PR #1170)
* Reduce redundant database work in email pusher (PR #1174)
* Allow configurable rate limiting per AS (PR #1175)
* Check whether to ratelimit sooner to avoid work (PR #1176)
* Standardise prometheus metrics (PR #1177)

Bug fixes:

* Fix incredibly slow back pagination query (PR #1178)
* Fix infinite typing bug (PR #1179)

Changes in synapse v0.18.2-rc2 (2016-10-25)
===========================================

(This release did not include the changes advertised and was identical to RC1)

Changes in synapse v0.18.2-rc1 (2016-10-17)
===========================================

Changes:

* Remove redundant event_auth index (PR #1113)
* Reduce DB hits for replication (PR #1141)
* Implement pluggable password auth (PR #1155)
* Remove rate limiting from app service senders and fix get_or_create_user requester, thanks to Patrik Oldsberg (PR #1157)
* window.postmessage for Interactive Auth fallback (PR #1159)
* Use sys.executable instead of hardcoded python, thanks to Pedro Larroy (PR #1162)
* Add config option for adding additional TLS fingerprints (PR #1167)
* User-interactive auth on delete device (PR #1168)

Bug fixes:

* Fix not being allowed to set your own state_key, thanks to Patrik Oldsberg (PR #1150)
* Fix interactive auth to return 401 for incorrect
  password (PR #1160, #1166)
* Fix email push notifs being dropped (PR #1169)

Changes in synapse v0.18.1 (2016-10-05)
=======================================

No changes since v0.18.1-rc1

Changes in synapse v0.18.1-rc1 (2016-09-30)
===========================================

Features:

* Add total_room_count_estimate to ``/publicRooms`` (PR #1133)

Changes:

* Time out typing over federation (PR #1140)
* Restructure LDAP authentication (PR #1153)

Bug fixes:

* Fix 3pid invites when server is already in the room (PR #1136)
* Fix upgrading with SQLite taking lots of CPU for a few days after upgrade (PR #1144)
* Fix upgrading from very old database versions (PR #1145)
* Fix port script to work with recently added tables (PR #1146)

Changes in synapse v0.18.0 (2016-09-19)
=======================================

The release includes major changes to the state storage database schemas, which
significantly reduce database size. Synapse will attempt to upgrade the current
data in the background. Servers with large SQLite databases may experience
degradation of performance while this upgrade is in progress, therefore you may
want to consider migrating to using Postgres before upgrading very large SQLite
databases.

Changes:

* Make public room search case insensitive (PR #1127)

Bug fixes:

* Fix and clean up publicRooms pagination (PR #1129)

Changes in synapse v0.18.0-rc1 (2016-09-16)
===========================================

Features:

* Add ``only=highlight`` on ``/notifications`` (PR #1081)
* Add server param to /publicRooms (PR #1082)
* Allow clients to ask for the whole of a single state event (PR #1094)
* Add is_direct param to /createRoom (PR #1108)
* Add pagination support to publicRooms (PR #1121)
* Add very basic filter API to /publicRooms (PR #1126)
* Add basic direct to device messaging support for E2E (PR #1074, #1084, #1104, #1111)

Changes:

* Move to storing state_groups_state as deltas, greatly reducing DB size (PR #1065)
* Reduce amount of state pulled out of the DB during common requests (PR #1069)
* Allow PDF to be rendered from media repo (PR #1071)
* Reindex state_groups_state after pruning (PR #1085)
* Clobber EDUs in send queue (PR #1095)
* Conform better to the CAS protocol specification (PR #1100)
* Limit how often we ask for keys from dead servers (PR #1114)

Bug fixes:

* Fix /notifications API when used with ``from`` param (PR #1080)
* Fix backfill when cannot find an event. (PR #1107)

Changes in synapse v0.17.3 (2016-09-09)
=======================================

This release fixes a major bug that stopped servers from handling rooms with
over 1000 members.

Changes in synapse v0.17.2 (2016-09-08)
=======================================

This release contains security bug fixes. Please upgrade.

No changes since v0.17.2-rc1

Changes in synapse v0.17.2-rc1 (2016-09-05)
===========================================

Features:

* Start adding store-and-forward direct-to-device messaging (PR #1046, #1050, #1062, #1066)

Changes:

* Avoid pulling the full state of a room out so often (PR #1047, #1049, #1063, #1068)
* Don't notify for online to online presence transitions.
  (PR #1054)
* Occasionally persist unpersisted presence updates (PR #1055)
* Allow application services to have an optional 'url' (PR #1056)
* Clean up old sent transactions from DB (PR #1059)

Bug fixes:

* Fix None check in backfill (PR #1043)
* Fix membership changes to be idempotent (PR #1067)
* Fix bug in get_pdu where it would sometimes return events with incorrect signature

Changes in synapse v0.17.1 (2016-08-24)
=======================================

Changes:

* Delete old received_transactions rows (PR #1038)
* Pass through user-supplied content in /join/$room_id (PR #1039)

Bug fixes:

* Fix bug with backfill (PR #1040)

Changes in synapse v0.17.1-rc1 (2016-08-22)
===========================================

Features:

* Add notification API (PR #1028)

Changes:

* Don't print stack traces when failing to get remote keys (PR #996)
* Various federation /event/ perf improvements (PR #998)
* Only process one local membership event per room at a time (PR #1005)
* Move default display name push rule (PR #1011, #1023)
* Fix up preview URL API. Add tests. (PR #1015)
* Set ``Content-Security-Policy`` on media repo (PR #1021)
* Make notify_interested_services faster (PR #1022)
* Add usage stats to prometheus monitoring (PR #1037)

Bug fixes:

* Fix token login (PR #993)
* Fix CAS login (PR #994, #995)
* Fix /sync to not clobber status_msg (PR #997)
* Fix redacted state events to include prev_content (PR #1003)
* Fix some bugs in the auth/ldap handler (PR #1007)
* Fix backfill request to limit URI length, so that remotes don't reject the requests due to path length limits (PR #1012)
* Fix AS push code to not send duplicate events (PR #1025)

Changes in synapse v0.17.0 (2016-08-08)
=======================================

This release contains significant security bug fixes regarding authenticating
events received over federation. PLEASE UPGRADE.

This release changes the LDAP configuration format in a backwards incompatible
way, see PR #843 for details.

Changes:

* Add federation /version API (PR #990)
* Make psutil dependency optional (PR #992)

Bug fixes:

* Fix URL preview API to exclude HTML comments in description (PR #988)
* Fix error handling of remote joins (PR #991)

Changes in synapse v0.17.0-rc4 (2016-08-05)
===========================================

Changes:

* Change the way we summarize URLs when previewing (PR #973)
* Add new ``/state_ids/`` federation API (PR #979)
* Speed up processing of ``/state/`` response (PR #986)

Bug fixes:

* Fix event persistence when event has already been partially persisted (PR #975, #983, #985)
* Fix port script to also copy across backfilled events (PR #982)

Changes in synapse v0.17.0-rc3 (2016-08-02)
===========================================

Changes:

* Forbid non-ASes from registering users whose names begin with '_' (PR #958)
* Add some basic admin API docs (PR #963)

Bug fixes:

* Send the correct host header when fetching keys (PR #941)
* Fix joining a room that has missing auth events (PR #964)
* Fix various push bugs (PR #966, #970)
* Fix adding emails on registration (PR #968)

Changes in synapse v0.17.0-rc2 (2016-08-02)
===========================================

(This release did not include the changes advertised and was identical to RC1)

Changes in synapse v0.17.0-rc1 (2016-07-28)
===========================================

This release changes the LDAP configuration format in a backwards incompatible
way, see PR #843 for details.

Features:

* Add purge_media_cache admin API (PR #902)
* Add deactivate account admin API (PR #903)
* Add optional pepper to password hashing (PR #907, #910 by KentShikama)
* Add an admin option to shared secret registration (breaks backwards compat) (PR #909)
* Add purge local room history API (PR #911, #923, #924)
* Add requestToken endpoints (PR #915)
* Add an /account/deactivate endpoint (PR #921)
* Add filter param to /messages. Add 'contains_url' to filter. (PR #922)
* Add device_id support to /login (PR #929)
* Add device_id support to /v2/register flow. (PR #937, #942)
* Add GET /devices endpoint (PR #939, #944)
* Add GET /device/{deviceId} (PR #943)
* Add update and delete APIs for devices (PR #949)

Changes:

* Rewrite LDAP Authentication against ldap3 (PR #843 by mweinelt)
* Linearize some federation endpoints based on (origin, room_id) (PR #879)
* Remove the legacy v0 content upload API. (PR #888)
* Use similar naming we use in email notifs for push (PR #894)
* Optionally include password hash in createUser endpoint (PR #905 by KentShikama)
* Use a query that postgresql optimises better for get_events_around (PR #906)
* Fall back to 'username' if 'user' is not given for appservice registration. (PR #927 by Half-Shot)
* Add metrics for psutil derived memory usage (PR #936)
* Record device_id in client_ips (PR #938)
* Send the correct host header when fetching keys (PR #941)
* Log the hostname the reCAPTCHA was completed on (PR #946)
* Make the device id on e2e key upload optional (PR #956)
* Add r0.2.0 to the "supported versions" list (PR #960)
* Don't include name of room for invites in push (PR #961)

Bug fixes:

* Fix substitution failure in mail template (PR #887)
* Put most recent 20 messages in email notif (PR #892)
* Ensure that the guest user is in the database when upgrading accounts (PR #914)
* Fix various edge cases in auth handling (PR #919)
* Fix 500 ISE when sending alias event without a state_key (PR #925)
* Fix bug where we stored rejections in the state_group, persist all rejections (PR #948)
* Fix missing check of whether the user is banned when handling 3pid invites (PR #952)
* Fix a couple of bugs in the transaction and keyring code (PR #954, #955)

Changes in synapse v0.16.1-r1 (2016-07-08)
==========================================

THIS IS A CRITICAL SECURITY UPDATE.

This fixes a bug which allowed users' accounts to be accessed by unauthorised
users.

Changes in synapse v0.16.1 (2016-06-20)
=======================================

Bug fixes:

* Fix assorted bugs in ``/preview_url`` (PR #872)
* Fix TypeError when setting unicode passwords (PR #873)

Performance improvements:

* Turn ``use_frozen_events`` off by default (PR #877)
* Disable responding with canonical json for federation (PR #878)

Changes in synapse v0.16.1-rc1 (2016-06-15)
===========================================

Features: None

Changes:

* Log requester for ``/publicRoom`` endpoints when possible (PR #856)
* 502 on ``/thumbnail`` when can't connect to remote server (PR #862)
* Linearize fetching of gaps on incoming events (PR #871)

Bug fixes:

* Fix bug where rooms were marked as published by default (PR #857)
* Fix bug where joining room with an event with invalid sender (PR #868)
* Fix bug where backfilled events were sent down sync streams (PR #869)
* Fix bug where outgoing connections could wedge indefinitely, causing push notifications to be unreliable (PR #870)

Performance improvements:

* Improve ``/publicRooms`` performance (PR #859)

Changes in synapse v0.16.0 (2016-06-09)
=======================================

NB: As of v0.14 all AS config files must have an ID field.

Bug fixes:

* Don't make rooms published by default (PR #857)

Changes in synapse v0.16.0-rc2 (2016-06-08)
===========================================

Features:

* Add configuration option for tuning GC via ``gc.set_threshold`` (PR #849)

Changes:

* Record metrics about GC (PR #771, #847, #852)
* Add metric counter for number of persisted events (PR #841)

Bug fixes:

* Fix 'From' header in email notifications (PR #843)
* Fix presence where timeouts were not being fired for the first 8h after restarts (PR #842)
* Fix bug where synapse sent malformed transactions to AS's when retrying transactions (Commits 310197b, 8437906)

Performance improvements:

* Remove event fetching from DB threads (PR #835)
* Change the way we cache events (PR #836)
* Add events to cache when we persist them (PR #840)

Changes in synapse v0.16.0-rc1 (2016-06-03)
===========================================

Version 0.15 was not released. See v0.15.0-rc1 below for additional changes.

Features:

* Add email notifications for missed messages (PR #759, #786, #799, #810, #815, #821)
* Add a ``url_preview_ip_range_whitelist`` config param (PR #760)
* Add /report endpoint (PR #762)
* Add basic ignore user API (PR #763)
* Add an openidish mechanism for proving that you own a given user_id (PR #765)
* Allow clients to specify a server_name to avoid 'No known servers' (PR #794)
* Add secondary_directory_servers option to fetch room list from other servers (PR #808, #813)

Changes:

* Report per request metrics for all of the things using request_handler (PR #756)
* Correctly handle ``NULL`` password hashes from the database (PR #775)
* Allow receipts for events we haven't seen in the db (PR #784)
* Make synctl read a cache factor from config file (PR #785)
* Increment badge count per missed convo, not per msg (PR #793)
* Special case m.room.third_party_invite event auth to match invites (PR #814)

Bug fixes:

* Fix typo in event_auth servlet path (PR #757)
* Fix password reset (PR #758)

Performance improvements:

* Reduce database inserts when sending transactions (PR #767)
* Queue events by room for persistence (PR #768)
* Add cache to ``get_user_by_id`` (PR #772)
* Add and use ``get_domain_from_id`` (PR #773)
* Use tree cache for ``get_linearized_receipts_for_room`` (PR #779)
* Remove unused indices (PR #782)
* Add caches to ``bulk_get_push_rules*`` (PR #804)
* Cache ``get_event_reference_hashes`` (PR #806)
* Add ``get_users_with_read_receipts_in_room`` cache (PR #809)
* Use state to calculate ``get_users_in_room`` (PR #811)
* Load push rules in storage layer so that they get cached (PR #825)
* Make ``get_joined_hosts_for_room`` use get_users_in_room (PR #828)
* Poke notifier on next reactor tick (PR #829)
* Change CacheMetrics to be quicker (PR #830)

Changes in synapse v0.15.0-rc1 (2016-04-26)
===========================================

Features:

* Add login support for Javascript Web Tokens, thanks to Niklas Riekenbrauck (PR #671, #687)
* Add URL previewing support (PR #688)
* Add login support for LDAP, thanks to Christoph Witzany (PR #701)
* Add GET endpoint for pushers (PR #716)

Changes:

* Never notify for member events (PR #667)
* Deduplicate identical ``/sync`` requests (PR #668)
* Require user to have left room to forget room (PR #673)
* Use DNS cache if within TTL (PR #677)
* Let users see their own leave events (PR #699)
* Deduplicate membership changes (PR #700)
* Increase performance of pusher code (PR #705)
* Respond with error status 504 if failed to talk to remote server (PR #731)
* Increase search performance on postgres (PR #745)

Bug fixes:

* Fix bug where disabling all notifications still resulted in push (PR #678)
* Fix bug where users couldn't reject remote invites if remote refused (PR #691)
* Fix bug where synapse attempted to backfill from itself (PR #693)
* Fix bug where profile information was not correctly added when joining remote rooms (PR #703)
* Fix bug where register API required incorrect key name for AS registration (PR #727)

Changes in synapse v0.14.0 (2016-03-30)
=======================================

No changes from v0.14.0-rc2

Changes in synapse v0.14.0-rc2 (2016-03-23)
===========================================

Features:

* Add published room list API (PR #657)

Changes:

* Change various caches to consume less memory (PR #656, #658, #660, #662, #663, #665)
* Allow rooms to be published without requiring an alias (PR #664)
* Intern common strings in caches to reduce memory footprint (#666)

Bug fixes:

* Fix reject invites over federation (PR #646)
* Fix bug where registration was not idempotent (PR #649)
* Update aliases event after deleting aliases (PR #652)
* Fix unread notification count, which was sometimes wrong (PR #661)

Changes in synapse v0.14.0-rc1 (2016-03-14)
===========================================

Features:

* Add event_id to response to state event PUT (PR #581)
* Allow guest users access to messages in rooms they have joined (PR #587)
* Add config for what state is included in a room invite (PR #598)
* Send the inviter's member event in room invite state (PR #607)
* Add error codes for malformed/bad JSON in /login (PR #608)
* Add support for changing the actions for default rules (PR #609)
* Add environment variable SYNAPSE_CACHE_FACTOR, default it to 0.1 (PR #612)
* Add ability for alias creators to delete aliases (PR #614)
* Add profile information to invites (PR #624)

Changes:

* Enforce user_id exclusivity for AS registrations (PR #572)
* Make adding push rules idempotent (PR #587)
* Improve presence performance (PR #582, #586)
* Change presence semantics for ``last_active_ago`` (PR #582, #586)
* Don't allow ``m.room.create`` to be changed (PR #596)
* Add 800x600 to default list of valid thumbnail sizes (PR #616)
* Always include kicks and bans in full /sync (PR #625)
* Send history visibility on boundary changes (PR #626)
* Register endpoint now returns a refresh_token (PR #637)

Bug fixes:

* Fix bug where we returned incorrect state in /sync (PR #573)
* Always return a JSON object from push rule API (PR #606)
* Fix bug where registering without a user id sometimes failed (PR #610)
* Report size of ExpiringCache in cache size metrics (PR #611)
* Fix rejection of invites to empty rooms (PR #615)
* Fix usage of ``bcrypt`` to not use ``checkpw`` (PR #619)
* Pin ``pysaml2`` dependency (PR #634)
* Fix bug in ``/sync`` where timeline order was incorrect for backfilled events (PR #635)

Changes in synapse v0.13.3 (2016-02-11)
=======================================

* Fix bug where ``/sync`` would occasionally return events in the wrong room.

Changes in synapse v0.13.2 (2016-02-11)
=======================================

* Fix bug where ``/events`` would fail to skip some events if there had been more events than the limit specified since the last request (PR #570)

Changes in synapse v0.13.1 (2016-02-10)
=======================================

* Bump matrix-angular-sdk (matrix web console) dependency to 0.6.8 to pull in the fix for SYWEB-361 so that the default client can display HTML messages again(!)

Changes in synapse v0.13.0 (2016-02-10)
=======================================

This version includes an upgrade of the schema, specifically adding an index to
the ``events`` table. This may cause synapse to pause for several minutes the
first time it is started after the upgrade.

Changes:

* Improve general performance (PR #540, #543,
  #544, #54, #549, #567)
* Change guest user ids to be incrementing integers (PR #550)
* Improve performance of public room list API (PR #552)
* Change profile API to omit keys rather than return null (PR #557)
* Add ``/media/r0`` endpoint prefix, which is equivalent to ``/media/v1/`` (PR #595)

Bug fixes:

* Fix bug with upgrading guest accounts where it would fail if you opened the registration email on a different device (PR #547)
* Fix bug where unread count could be wrong (PR #568)

Changes in synapse v0.12.1-rc1 (2016-01-29)
===========================================

Features:

* Add unread notification counts in ``/sync`` (PR #456)
* Add support for inviting 3pids in ``/createRoom`` (PR #460)
* Add ability for guest accounts to upgrade (PR #462)
* Add ``/versions`` API (PR #468)
* Add ``event`` to ``/context`` API (PR #492)
* Add specific error code for invalid user names in ``/register`` (PR #499)
* Add support for push badge counts (PR #507)
* Add support for non-guest users to peek in rooms using ``/events`` (PR #510)

Changes:

* Change ``/sync`` so that guest users only get rooms they've joined (PR #469)
* Change to require unbanning before other membership changes (PR #501)
* Change default push rules to notify for all messages (PR #486)
* Change default push rules to not notify on membership changes (PR #514)
* Change default push rules in one to one rooms to only notify for events that are messages (PR #529)
* Change ``/sync`` to reject requests with a ``from`` query param (PR #512)
* Change server manhole to use SSH rather than telnet (PR #473)
* Change server to require AS users to be registered before use (PR #487)
* Change server not to start when ASes are invalidly configured (PR #494)
* Change server to require ID and ``as_token`` to be unique for AS's (PR #496)
* Change maximum pagination limit to 1000 (PR #497)

Bug fixes:

* Fix bug where ``/sync`` didn't return when something under the leave key changed (PR #461)
* Fix bug where we returned smaller rather than larger than requested thumbnails when ``method=crop`` (PR #464)
* Fix thumbnails API to only return cropped thumbnails when asking for a cropped thumbnail (PR #475)
* Fix bug where we occasionally still logged access tokens (PR #477)
* Fix bug where ``/events`` would always return immediately for guest users (PR #480)
* Fix bug where ``/sync`` unexpectedly returned old left rooms (PR #481)
* Fix enabling and disabling push rules (PR #498)
* Fix bug where ``/register`` returned 500 when given unicode username (PR #513)

Changes in synapse v0.12.0 (2016-01-04)
=======================================

* Expose ``/login`` under ``r0`` (PR #459)

Changes in synapse v0.12.0-rc3 (2015-12-23)
===========================================

* Allow guest accounts access to ``/sync`` (PR #455)
* Allow filters to include/exclude rooms at the room level rather than just from the components of the sync for each room. (PR #454)
* Include urls for room avatars in the response to ``/publicRooms`` (PR #453)
* Don't set an identicon as the avatar for a user when they register (PR #450)
* Add a ``display_name`` to third-party invites (PR #449)
* Send more information to the identity server for third-party invites so that it can send richer messages to the invitee (PR #446)
* Cache the responses to ``/initialSync`` for 5 minutes.
  If a client retries a request to ``/initialSync`` before a response was computed to the first request then the same response is used for both requests (PR #457)
* Fix a bug where synapse would always request the signing keys of remote servers even when the key was cached locally (PR #452)
* Fix 500 when paginating search results (PR #447)
* Fix a bug where synapse was leaking raw email address in third-party invites (PR #448)

Changes in synapse v0.12.0-rc2 (2015-12-14)
===========================================

* Add caches for whether rooms have been forgotten by a user (PR #434)
* Remove instructions to use ``--process-dependency-link`` since all of the dependencies of synapse are on PyPI (PR #436)
* Parallelise the processing of ``/sync`` requests (PR #437)
* Fix race updating presence in ``/events`` (PR #444)
* Fix bug back-populating search results (PR #441)
* Fix bug calculating state in ``/sync`` requests (PR #442)

Changes in synapse v0.12.0-rc1 (2015-12-10)
===========================================

* Host the client APIs released as r0 by https://matrix.org/docs/spec/r0.0.0/client_server.html on paths prefixed by ``/_matrix/client/r0``. (PR #430, PR #415, PR #400)
* Updates the client APIs to match r0 of the matrix specification.
* All APIs return events in the new event format, old APIs also include the fields needed to parse the event using the old format for compatibility. (PR #402)
* Search results are now given as a JSON array rather than a JSON object (PR #405)
* Miscellaneous changes to search (PR #403, PR #406, PR #412)
* Filter JSON objects may now be passed as query parameters to ``/sync`` (PR #431)
* Fix implementation of ``/admin/whois`` (PR #418)
* Only include the rooms that user has left in ``/sync`` if the client requests them in the filter (PR #423)
* Don't push for ``m.room.message`` by default (PR #411)
* Add API for setting per account user data (PR #392)
* Allow users to forget rooms (PR #385)
* Performance improvements and monitoring:

  * Add per-request counters for CPU time spent on the main python thread. (PR #421, PR #420)
  * Add per-request counters for time spent in the database (PR #429)

* Make state updates in the C+S API idempotent (PR #416)
* Only fire ``user_joined_room`` if the user has actually joined. (PR #410)
* Reuse a single http client, rather than creating new ones (PR #413)
* Fixed a bug upgrading from older versions of synapse on postgresql (PR #417)

Changes in synapse v0.11.1 (2015-11-20)
=======================================

* Add extra options to search API (PR #394)
* Fix bug where we did not correctly cap federation retry timers. This meant it could take several hours for servers to start talking to resurrected servers, even when they were receiving traffic from them (PR #393)
* Don't advertise login token flow unless CAS is enabled.
  This caused issues where some clients would always use the fallback API if they did not recognize all login flows (PR #391)
* Change /v2 sync API to rename ``private_user_data`` to ``account_data`` (PR #386)
* Change /v2 sync API to remove the ``event_map`` and rename keys in ``rooms`` object (PR #389)

Changes in synapse v0.11.0-r2 (2015-11-19)
==========================================

* Fix bug in database port script (PR #387)

Changes in synapse v0.11.0-r1 (2015-11-18)
==========================================

* Retry and fail federation requests more aggressively for requests that block client side requests (PR #384)

Changes in synapse v0.11.0 (2015-11-17)
=======================================

* Change CAS login API (PR #349)

Changes in synapse v0.11.0-rc2 (2015-11-13)
===========================================

* Various changes to /sync API response format (PR #373)
* Fix regression when setting display name in newly joined room over federation (PR #368)
* Fix problem where /search was slow when using SQLite (PR #366)

Changes in synapse v0.11.0-rc1 (2015-11-11)
===========================================

* Add Search API (PR #307, #324, #327, #336, #350, #359)
* Add 'archived' state to v2 /sync API (PR #316)
* Add ability to reject invites (PR #317)
* Add config option to disable password login (PR #322)
* Add the login fallback API (PR #330)
* Add room context API (PR #334)
* Add room tagging support (PR #335)
* Update v2 /sync API to match spec (PR #305, #316, #321, #332, #337, #341)
* Change retry schedule for application services (PR #320)
* Change retry schedule for remote servers (PR #340)
* Fix bug where we hosted static content in the incorrect place (PR #329)
* Fix bug where we didn't increment retry interval for remote servers (PR #343)

Changes in synapse v0.10.1-rc1 (2015-10-15)
===========================================

* Add support for CAS, thanks to Steven Hammerton (PR #295, #296)
* Add support for using macaroons for ``access_token`` (PR #256, #229)
* Add support for ``m.room.canonical_alias`` (PR #287)
* Add support for viewing the history of rooms that they have left. (PR #276, #294)
* Add support for refresh tokens (PR #240)
* Add flag on creation which disables federation of the room (PR #279)
* Add some room state to invites. (PR #275)
* Atomically persist events when joining a room over federation (PR #283)
* Change default history visibility for private rooms (PR #271)
* Allow users to redact their own sent events (PR #262)
* Use tox for tests (PR #247)
* Split up syutil into separate libraries (PR #243)

Changes in synapse v0.10.0-r2 (2015-09-16)
==========================================

* Fix bug where we always fetched remote server signing keys instead of using ones in our cache.
* Fix adding threepids to an existing account.
* Fix bug with inviting over federation where remote server was already in the room. (PR #281, SYN-392)

Changes in synapse v0.10.0-r1 (2015-09-08)
==========================================

* Fix bug with python packaging

Changes in synapse v0.10.0 (2015-09-03)
=======================================

No change from release candidate.

Changes in synapse v0.10.0-rc6 (2015-09-02)
===========================================

* Remove some of the old database upgrade scripts.
* Fix database port script to work with newly created sqlite databases.

Changes in synapse v0.10.0-rc5 (2015-08-27)
===========================================

* Fix bug that broke downloading files with ascii filenames across federation.

Changes in synapse v0.10.0-rc4 (2015-08-27)
===========================================

* Allow UTF-8 filenames for upload. (PR #259)

Changes in synapse v0.10.0-rc3 (2015-08-25)
===========================================

* Add ``--keys-directory`` config option to specify where files such as certs and signing keys should be stored in, when using ``--generate-config`` or ``--generate-keys``. (PR #250)
* Allow ``--config-path`` to specify a directory, causing synapse to use all \*.yaml files in the directory as config files. (PR #249)
* Add ``web_client_location`` config option to specify static files to be hosted by synapse under ``/_matrix/client``. (PR #245)
* Add helper utility to synapse to read and parse the config files and extract the value of a given key. For example::

    $ python -m synapse.config read server_name -c homeserver.yaml
    localhost

  (PR #246)

Changes in synapse v0.10.0-rc2 (2015-08-24)
===========================================

* Fix bug where we incorrectly populated the ``event_forward_extremities`` table, resulting in problems joining large remote rooms (e.g. ``#matrix:matrix.org``)
* Reduce the number of times we wake up pushers by not listening for presence or typing events, reducing the CPU cost of each pusher.

Changes in synapse v0.10.0-rc1 (2015-08-21)
===========================================

Also see v0.9.4-rc1 changelog, which has been amalgamated into this release.

General:

* Upgrade to Twisted 15 (PR #173)
* Add support for serving and fetching encryption keys over federation. (PR #208)
* Add support for logging in with email address (PR #234)
* Add support for new ``m.room.canonical_alias`` event. (PR #233)
* Change synapse to treat user IDs case insensitively during registration and login. (If two users already exist with case insensitive matching user ids, synapse will continue to require them to specify their user ids exactly.)
* Error if a user tries to register with an email already in use. (PR #211)
* Add extra and improve existing caches (PR #212, #219, #226, #228)
* Batch various storage requests (PR #226, #228)
* Fix bug where we didn't correctly log the entity that triggered the request if the request came in via an application service (PR #230)
* Fix bug where we needlessly regenerated the full list of rooms an AS is interested in. (PR #232)
* Add support for AS's to use v2_alpha registration API (PR #210)

Configuration:

* Add ``--generate-keys`` that will generate any missing cert and key files in the configuration files. This is equivalent to running ``--generate-config`` on an existing configuration file. (PR #220)
* ``--generate-config`` now no longer requires a ``--server-name`` parameter when used on existing configuration files. (PR #220)
* Add ``--print-pidfile`` flag that controls the printing of the pid to stdout of the daemonised process. (PR #213)

Media Repository:

* Fix bug where we picked a lower resolution image than requested. (PR #205)
* Add support for specifying if the media repository should dynamically thumbnail images or not. (PR #206)

Metrics:

* Add statistics from the reactor to the metrics API. (PR #224, #225)

Demo Homeservers:

* Fix starting the demo homeservers without rate-limiting enabled. (PR #182)
* Fix enabling registration on demo homeservers (PR #223)

Changes in synapse v0.9.4-rc1 (2015-07-21)
==========================================

General:

* Add basic implementation of receipts. (SPEC-99)
* Add support for configuration presets in room creation API.
  (PR #203)
* Add auth event that limits the visibility of history for new users. (SPEC-134)
* Add SAML2 login/registration support. (PR #201. Thanks Muthu Subramanian!)
* Add client side key management APIs for end to end encryption. (PR #198)
* Change power level semantics so that you cannot kick, ban or change power levels of users that have equal or greater power level than you. (SYN-192)
* Improve performance by bulk inserting events where possible. (PR #193)
* Improve performance by bulk verifying signatures where possible. (PR #194)

Configuration:

* Add support for including TLS certificate chains.

Media Repository:

* Add Content-Disposition headers to content repository responses. (SYN-150)

Changes in synapse v0.9.3 (2015-07-01)
======================================

No changes from v0.9.3 Release Candidate 1.

Changes in synapse v0.9.3-rc1 (2015-06-23)
==========================================

General:

* Fix a memory leak in the notifier. (SYN-412)
* Improve performance of room initial sync. (SYN-418)
* General improvements to logging.
* Remove ``access_token`` query params from ``INFO`` level logging.

Configuration:

* Add support for specifying and configuring multiple listeners. (SYN-389)

Application services:

* Fix bug where synapse failed to send user queries to application services.

Changes in synapse v0.9.2-r2 (2015-06-15)
=========================================

Fix packaging so that schema delta python files get included in the package.

Changes in synapse v0.9.2 (2015-06-12)
======================================

General:

* Use ultrajson for json (de)serialisation when a canonical encoding is not required. Ultrajson is significantly faster than simplejson in certain circumstances.
* Use connection pools for outgoing HTTP connections.
* Process thumbnails on separate threads.

Configuration:

* Add option, ``gzip_responses``, to disable HTTP response compression.

Federation:

* Improve resilience of backfill by ensuring we fetch any missing auth events.
* Improve performance of backfill and joining remote rooms by removing unnecessary computations. This included handling events we'd previously handled as well as attempting to compute the current state for outliers.

Changes in synapse v0.9.1 (2015-05-26)
======================================

General:

* Add support for backfilling when a client paginates. This allows servers to request history for a room from remote servers when a client tries to paginate history the server does not have - SYN-36
* Fix bug where you couldn't disable non-default pushrules - SYN-378
* Fix ``register_new_user`` script - SYN-359
* Improve performance of fetching events from the database, this improves both initialSync and sending of events.
* Improve performance of event streams, allowing synapse to handle more simultaneous connected clients.

Federation:

* Fix bug with existing backfill implementation where it returned the wrong selection of events in some circumstances.
* Improve performance of joining remote rooms.

Configuration:

* Add support for changing the bind host of the metrics listener via the ``metrics_bind_host`` option.

Changes in synapse v0.9.0-r5 (2015-05-21)
=========================================

* Add more database caches to reduce amount of work done for each pusher. This radically reduces CPU usage when multiple pushers are set up in the same room.

Changes in synapse v0.9.0 (2015-05-07)
======================================

General:

* Add support for using a PostgreSQL database instead of SQLite.
See `docs/postgres.rst`_ for details. * Add password change and reset APIs. See `Registration`_ in the spec. * Fix memory leak due to not releasing stale notifiers - SYN-339. * Fix race in caches that occasionally caused some presence updates to be dropped - SYN-369. * Check server name has not changed on restart. * Add a sample systemd unit file and a logger configuration in contrib/systemd. Contributed Ivan Shapovalov. Federation: * Add key distribution mechanisms for fetching public keys of unavailable remote home servers. See `Retrieving Server Keys`_ in the spec. Configuration: * Add support for multiple config files. * Add support for dictionaries in config files. * Remove support for specifying config options on the command line, except for: * ``--daemonize`` - Daemonize the home server. * ``--manhole`` - Turn on the twisted telnet manhole service on the given port. * ``--database-path`` - The path to a sqlite database to use. * ``--verbose`` - The verbosity level. * ``--log-file`` - File to log to. * ``--log-config`` - Python logging config file. * ``--enable-registration`` - Enable registration for new users. Application services: * Reliably retry sending of events from Synapse to application services, as per `Application Services`_ spec. * Application services can no longer register via the ``/register`` API, instead their configuration should be saved to a file and listed in the synapse ``app_service_config_files`` config option. The AS configuration file has the same format as the old ``/register`` request. See `docs/application_services.rst`_ for more information. .. _`docs/postgres.rst`: docs/postgres.rst .. _`docs/application_services.rst`: docs/application_services.rst .. _`Registration`: https://github.com/matrix-org/matrix-doc/blob/master/specification/10_client_server_api.rst#registration .. _`Retrieving Server Keys`: https://github.com/matrix-org/matrix-doc/blob/6f2698/specification/30_server_server_api.rst#retrieving-server-keys .. _`Application Services`: https://github.com/matrix-org/matrix-doc/blob/0c6bd9/specification/25_application_service_api.rst#home-server---application-service-api Changes in synapse v0.8.1 (2015-03-18) ====================================== * Disable registration by default. New users can be added using the command ``register_new_matrix_user`` or by enabling registration in the config. * Add metrics to synapse. To enable metrics use config options ``enable_metrics`` and ``metrics_port``. * Fix bug where banning only kicked the user. Changes in synapse v0.8.0 (2015-03-06) ====================================== General: * Add support for registration fallback. This is a page hosted on the server which allows a user to register for an account, regardless of what client they are using (e.g. mobile devices). * Added new default push rules and made them configurable by clients: * Suppress all notice messages. * Notify when invited to a new room. * Notify for messages that don't match any rule. * Notify on incoming call. Federation: * Added per host server side rate-limiting of incoming federation requests. * Added a ``/get_missing_events/`` API to federation to reduce number of ``/events/`` requests. Configuration: * Added configuration option to disable registration: ``disable_registration``. * Added configuration option to change soft limit of number of open file descriptors: ``soft_file_limit``. * Make ``tls_private_key_path`` optional when running with ``no_tls``. 
Application services: * Application services can now poll on the CS API ``/events`` for their events, by providing their application service ``access_token``. * Added exclusive namespace support to application services API. Changes in synapse v0.7.1 (2015-02-19) ====================================== * Initial alpha implementation of parts of the Application Services API. Including: - AS Registration / Unregistration - User Query API - Room Alias Query API - Push transport for receiving events. - User/Alias namespace admin control * Add cache when fetching events from remote servers to stop repeatedly fetching events with bad signatures. * Respect the per remote server retry scheme when fetching both events and server keys to reduce the number of times we send requests to dead servers. * Inform remote servers when the local server fails to handle a received event. * Turn off python bytecode generation due to problems experienced when upgrading from previous versions. Changes in synapse v0.7.0 (2015-02-12) ====================================== * Add initial implementation of the query auth federation API, allowing servers to agree on whether an event should be allowed or rejected. * Persist events we have rejected from federation, fixing the bug where servers would keep requesting the same events. * Various federation performance improvements, including: - Add in memory caches on queries such as: * Computing the state of a room at a point in time, used for authorization on federation requests. * Fetching events from the database. * User's room membership, used for authorizing presence updates. - Upgraded JSON library to improve parsing and serialisation speeds. * Add default avatars to new user accounts using pydenticon library. * Correctly time out federation requests. * Retry federation requests against different servers. * Add support for push and push rules. * Add alpha versions of proposed new CSv2 APIs, including ``/sync`` API. Changes in synapse 0.6.1 (2015-01-07) ===================================== * Major optimizations to improve performance of initial sync and event sending in large rooms (by up to 10x) * Media repository now includes a Content-Length header on media downloads. * Improve quality of thumbnails by changing resizing algorithm. Changes in synapse 0.6.0 (2014-12-16) ===================================== * Add new API for media upload and download that supports thumbnailing. * Replicate media uploads over multiple homeservers so media is always served to clients from their local homeserver. This obsoletes the --content-addr parameter and confusion over accessing content directly from remote homeservers. * Implement exponential backoff when retrying federation requests when sending to remote homeservers which are offline. * Implement typing notifications. * Fix bugs where we sent events with invalid signatures due to bugs where we incorrectly persisted events. * Improve performance of database queries involving retrieving events. Changes in synapse 0.5.4a (2014-12-13) ====================================== * Fix bug while generating the error message when a file path specified in the config doesn't exist. Changes in synapse 0.5.4 (2014-12-03) ===================================== * Fix presence bug where some rooms did not display presence updates for remote users. * Do not log SQL timing log lines when started with "-v" * Fix potential memory leak. 
Changes in synapse 0.5.3c (2014-12-02) ====================================== * Change the default value for the `content_addr` option to use the HTTP listener, as by default the HTTPS listener will be using a self-signed certificate. Changes in synapse 0.5.3 (2014-11-27) ===================================== * Fix bug that caused joining a remote room to fail if a single event was not signed correctly. * Fix bug which caused servers to continuously try and fetch events from other servers. Changes in synapse 0.5.2 (2014-11-26) ===================================== Fix major bug that caused rooms to disappear from people's initial sync. Changes in synapse 0.5.1 (2014-11-26) ===================================== See UPGRADES.rst for specific instructions on how to upgrade. * Fix bug where we served up an Event that did not match its signatures. * Fix regression where we no longer correctly handled the case where a homeserver receives an event for a room it doesn't recognise (but is in). Changes in synapse 0.5.0 (2014-11-19) ===================================== This release includes changes to the federation protocol and client-server API that are not backwards compatible. This release also changes the internal database schemas and so requires servers to drop their current history. See UPGRADES.rst for details. Homeserver: * Add authentication and authorization to the federation protocol. Events are now signed by their originating homeservers. * Implement the new authorization model for rooms. * Split out web client into a separate repository: matrix-angular-sdk. * Change the structure of PDUs. * Fix bug where user could not join rooms via an alias containing 4-byte UTF-8 characters. * Merge concept of PDUs and Events internally. * Improve logging by adding request ids to log lines. * Implement a very basic room initial sync API. * Implement the new invite/join federation APIs. Webclient: * The webclient has been moved to a separate repository. Changes in synapse 0.4.2 (2014-10-31) ===================================== Homeserver: * Fix bugs where we did not notify users of correct presence updates. * Fix bug where we did not handle sub-second event stream timeouts. Webclient: * Add ability to click on messages to see JSON. * Add ability to redact messages. * Add ability to view and edit all room state JSON. * Handle incoming redactions. * Improve feedback on errors. * Fix bugs in mobile CSS. * Fix bugs with desktop notifications. Changes in synapse 0.4.1 (2014-10-17) ===================================== Webclient: * Fix bug with display of timestamps. Changes in synapse 0.4.0 (2014-10-17) ===================================== This release includes changes to the federation protocol and client-server API that are not backwards compatible. The Matrix specification has been moved to a separate git repository: http://github.com/matrix-org/matrix-doc You will also need an updated syutil and config. See UPGRADES.rst. Homeserver: * Sign federation transactions to assert strong identity over federation. * Rename timestamp keys in PDUs and events from 'ts' and 'hsob_ts' to 'origin_server_ts'. Changes in synapse 0.3.4 (2014-09-25) ===================================== This version adds support for using a TURN server. See docs/turn-howto.rst on how to set one up. Homeserver: * Add support for redaction of messages. * Fix bug where inviting a user on a remote home server could take up to 20-30s. * Implement a get current room state API. * Add support for specifying and retrieving TURN server configuration.
Webclient: * Add button to send messages to users from the home page. * Add support for using TURN for VoIP calls. * Show display name change messages. * Fix bug where the client didn't get the state of a newly joined room until after it has been refreshed. * Fix bugs with tab complete. * Fix bug where holding down the down arrow caused chrome to chew 100% CPU. * Fix bug where desktop notifications occasionally used "Undefined" as the display name. * Fix more places where we sometimes saw room IDs incorrectly. * Fix bug which caused lag when entering text in the text box. Changes in synapse 0.3.3 (2014-09-22) ===================================== Homeserver: * Fix bug where you continued to get events for rooms you had left. Webclient: * Add support for video calls with basic UI. * Fix bug where one to one chats were named after your display name rather than the other person's. * Fix bug which caused lag when typing in the textarea. * Refuse to run on browsers we know won't work. * Trigger pagination when joining new rooms. * Fix bug where we sometimes didn't display invitations in recents. * Automatically join room when accepting a VoIP call. * Disable outgoing and reject incoming calls on browsers we don't support VoIP in. * Don't display desktop notifications for messages in the room you are non-idle and speaking in. Changes in synapse 0.3.2 (2014-09-18) ===================================== Webclient: * Fix bug where an empty "bing words" list in old accounts didn't send notifications when it should have done. Changes in synapse 0.3.1 (2014-09-18) ===================================== This is a release to hotfix v0.3.0 to fix two regressions. Webclient: * Fix a regression where we sometimes displayed duplicate events. * Fix a regression where we didn't immediately remove rooms you were banned in from the recents list. Changes in synapse 0.3.0 (2014-09-18) ===================================== See UPGRADE for information about changes to the client server API, including breaking backwards compatibility with VoIP calls and registration API. Homeserver: * When a user changes their displayname or avatar the server will now update all their join states to reflect this. * The server now adds "age" key to events to indicate how old they are. This is clock independent, so at no point does any server or webclient have to assume their clock is in sync with everyone else. * Fix bug where we didn't correctly pull in missing PDUs. * Fix bug where prev_content key wasn't always returned. * Add support for password resets. Webclient: * Improve page content loading. * Join/parts now trigger desktop notifications. * Always show room aliases in the UI if one is present. * No longer show user-count in the recents side panel. * Add up & down arrow support to the text box for message sending to step through your sent history. * Don't display notifications for our own messages. * Emotes are now formatted correctly in desktop notifications. * The recents list now differentiates between public & private rooms. * Fix bug where when switching between rooms the pagination flickered before the view jumped to the bottom of the screen. * Add bing word support. Registration API: * The registration API has been overhauled to function like the login API. In practice, this means registration requests must now include the following: 'type':'m.login.password'. See UPGRADE for more information on this. * The 'user_id' key has been renamed to 'user' to better match the login API. 
* There is an additional login type: 'm.login.email.identity'. * The command client and web client have been updated to reflect these changes. Changes in synapse 0.2.3 (2014-09-12) ===================================== Homeserver: * Fix bug where we stopped sending events to remote home servers if a user from that home server left, even if there were some still in the room. * Fix bugs in the state conflict resolution where it was incorrectly rejecting events. Webclient: * Display room names and topics. * Allow setting/editing of room names and topics. * Display information about rooms on the main page. * Handle ban and kick events in real time. * VoIP UI and reliability improvements. * Add glare support for VoIP. * Improvements to initial startup speed. * Don't display duplicate join events. * Local echo of messages. * Differentiate sending and sent of local echo. * Various minor bug fixes. Changes in synapse 0.2.2 (2014-09-06) ===================================== Homeserver: * When the server returns state events it now also includes the previous content. * Add support for inviting people when creating a new room. * Make the homeserver inform the room via `m.room.aliases` when a new alias is added for a room. * Validate `m.room.power_level` events. Webclient: * Add support for captchas on registration. * Handle `m.room.aliases` events. * Asynchronously send messages and show a local echo. * Inform the UI when a message failed to send. * Only autoscroll on receiving a new message if the user was already at the bottom of the screen. * Add support for ban/kick reasons. Changes in synapse 0.2.1 (2014-09-03) ===================================== Homeserver: * Added support for signing up with a third party id. * Add synctl scripts. * Added rate limiting. * Add option to change the external address the content repo uses. * Presence bug fixes. Webclient: * Added support for signing up with a third party id. * Added support for banning and kicking users. * Added support for displaying and setting ops. * Added support for room names. * Fix bugs with room membership event display. Changes in synapse 0.2.0 (2014-09-02) ===================================== This update changes many configuration options, updates the database schema and mandates SSL for server-server connections. Homeserver: * Require SSL for server-server connections. * Add SSL listener for client-server connections. * Add ability to use config files. * Add support for kicking/banning and power levels. * Allow setting of room names and topics on creation. * Change presence to include last seen time of the user. * Change url path prefix to /_matrix/... * Bug fixes to presence. Webclient: * Reskin the CSS for registration and login. * Various improvements to rooms CSS. * Support changes in client-server API. * Bug fixes to VOIP UI. * Various bug fixes to handling of changes to room member list. Changes in synapse 0.1.2 (2014-08-29) ===================================== Webclient: * Add basic call state UI for VoIP calls. Changes in synapse 0.1.1 (2014-08-29) ===================================== Homeserver: * Fix bug that caused the event stream to not notify some clients about changes. Changes in synapse 0.1.0 (2014-08-29) ===================================== Presence has been reenabled in this release. Homeserver: * Update client to server API, including: - Use a more consistent url scheme. - Provide more useful information in the initial sync api. * Change the presence handling to be much more efficient. 
* Change the presence server to server API to not require explicit polling of all users who share a room with a user. * Fix races in the event streaming logic. Webclient: * Update to use new client to server API. * Add basic VOIP support. * Add idle timers that change your status to away. * Add recent rooms column when viewing a room. * Various network efficiency improvements. * Add basic mobile browser support. * Add a settings page. Changes in synapse 0.0.1 (2014-08-22) ===================================== Presence has been disabled in this release due to a bug that caused the homeserver to spam other remote homeservers. Homeserver: * Completely change the database schema to support generic event types. * Improve presence reliability. * Improve reliability of joining remote rooms. * Fix bug where room join events were duplicated. * Improve initial sync API to return more information to the client. * Stop generating fake messages for room membership events. Webclient: * Add tab completion of names. * Add ability to upload and send images. * Add profile pages. * Improve CSS layout of room. * Disambiguate identical display names. * Don't get remote users display names and avatars individually. * Use the new initial sync API to reduce number of round trips to the homeserver. * Change url scheme to use room aliases instead of room ids where known. * Increase longpoll timeout. Changes in synapse 0.0.0 (2014-08-13) ===================================== * Initial alpha release synapse-0.24.0/CONTRIBUTING.rst000066400000000000000000000133361317335640100157310ustar00rootroot00000000000000Contributing code to Matrix =========================== Everyone is welcome to contribute code to Matrix (https://github.com/matrix-org), provided that they are willing to license their contributions under the same license as the project itself. We follow a simple 'inbound=outbound' model for contributions: the act of submitting an 'inbound' contribution means that the contributor agrees to license the code under the same terms as the project's overall 'outbound' license - in our case, this is almost always Apache Software License v2 (see LICENSE). How to contribute ~~~~~~~~~~~~~~~~~ The preferred and easiest way to contribute changes to Matrix is to fork the relevant project on github, and then create a pull request to ask us to pull your changes into our repo (https://help.github.com/articles/using-pull-requests/) **The single biggest thing you need to know is: please base your changes on the develop branch - /not/ master.** We use the master branch to track the most recent release, so that folks who blindly clone the repo and automatically check out master get something that works. Develop is the unstable branch where all the development actually happens: the workflow is that contributors should fork the develop branch to make a 'feature' branch for a particular contribution, and then make a pull request to merge this back into the matrix.org 'official' develop branch. We use github's pull request workflow to review the contribution, and either ask you to make any refinements needed or merge it and make them ourselves. The changes will then land on master when we next do a release. We use Jenkins for continuous integration (http://matrix.org/jenkins), and typically all pull requests get automatically tested Jenkins: if your change breaks the build, Jenkins will yell about it in #matrix-dev:matrix.org so please lurk there and keep an eye open. 
Code style ~~~~~~~~~~ All Matrix projects have a well-defined code-style - and sometimes we've even got as far as documenting it... For instance, synapse's code style doc lives at https://github.com/matrix-org/synapse/tree/master/docs/code_style.rst. Please ensure your changes match the cosmetic style of the existing project, and **never** mix cosmetic and functional changes in the same commit, as it makes it horribly hard to review otherwise. Attribution ~~~~~~~~~~~ Everyone who contributes anything to Matrix is welcome to be listed in the AUTHORS.rst file for the project in question. Please feel free to include a change to AUTHORS.rst in your pull request to list yourself and a short description of the area(s) you've worked on. Also, we sometimes have swag to give away to contributors - if you feel that Matrix-branded apparel is missing from your life, please mail us your shipping address to matrix at matrix.org and we'll try to fix it :) Sign off ~~~~~~~~ In order to have a concrete record that your contribution is intentional and you agree to license it under the same terms as the project's license, we've adopted the same lightweight approach that the Linux Kernel (https://www.kernel.org/doc/Documentation/SubmittingPatches), Docker (https://github.com/docker/docker/blob/master/CONTRIBUTING.md), and many other projects use: the DCO (Developer Certificate of Origin: http://developercertificate.org/). This is a simple declaration that you wrote the contribution or otherwise have the right to contribute it to Matrix:: Developer Certificate of Origin Version 1.1 Copyright (C) 2004, 2006 The Linux Foundation and its contributors. 660 York Street, Suite 102, San Francisco, CA 94110 USA Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Developer's Certificate of Origin 1.1 By making a contribution to this project, I certify that: (a) The contribution was created in whole or in part by me and I have the right to submit it under the open source license indicated in the file; or (b) The contribution is based upon previous work that, to the best of my knowledge, is covered under an appropriate open source license and I have the right under that license to submit that work with modifications, whether created in whole or in part by me, under the same open source license (unless I am permitted to submit under a different license), as indicated in the file; or (c) The contribution was provided directly to me by some other person who certified (a), (b) or (c) and I have not modified it. (d) I understand and agree that this project and the contribution are public and that a record of the contribution (including all personal information I submit with it, including my sign-off) is maintained indefinitely and may be redistributed consistent with this project or the open source license(s) involved. If you agree to this for your contribution, then all that's needed is to include the line in your commit or pull request comment:: Signed-off-by: Your Name ...using your real name; unfortunately pseudonyms and anonymous contributions can't be accepted. Git makes this trivial - just use the -s flag when you do ``git commit``, having first set ``user.name`` and ``user.email`` git configs (which you should have done anyway :) Conclusion ~~~~~~~~~~ That's it! Matrix is a very open and collaborative project as you might expect given our obsession with open communication. 
If we're going to successfully matrix together all the fragmented communication technologies out there we are reliant on contributions and collaboration from the community to do so. So please get involved - and we hope you have as much fun hacking on Matrix as we do!synapse-0.24.0/LICENSE000066400000000000000000000236761317335640100143050ustar00rootroot00000000000000 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. 
Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. 
END OF TERMS AND CONDITIONS synapse-0.24.0/MANIFEST.in000066400000000000000000000012521317335640100150200ustar00rootroot00000000000000include synctl include LICENSE include VERSION include *.rst include demo/README include demo/demo.tls.dh include demo/*.py include demo/*.sh recursive-include synapse/storage/schema *.sql recursive-include synapse/storage/schema *.py recursive-include docs * recursive-include res * recursive-include scripts * recursive-include scripts-dev * recursive-include synapse *.pyi recursive-include tests *.py recursive-include synapse/static *.css recursive-include synapse/static *.gif recursive-include synapse/static *.html recursive-include synapse/static *.js exclude jenkins.sh exclude jenkins*.sh exclude jenkins* recursive-exclude jenkins *.sh prune .github prune demo/etc synapse-0.24.0/MAP.rst000066400000000000000000000041121317335640100144270ustar00rootroot00000000000000Directory Structure =================== Warning: this may be a bit stale... :: . ├── cmdclient Basic CLI python Matrix client ├── demo Scripts for running standalone Matrix demos ├── docs All doc, including the draft Matrix API spec │   ├── client-server The client-server Matrix API spec │   ├── model Domain-specific elements of the Matrix API spec │   ├── server-server The server-server model of the Matrix API spec │   └── sphinx The internal API doc of the Synapse homeserver ├── experiments Early experiments of using Synapse's internal APIs ├── graph Visualisation of Matrix's distributed message store ├── synapse The reference Matrix homeserver implementation │   ├── api Common building blocks for the APIs │   │   ├── events Definition of state representation Events │   │   └── streams Definition of streamable Event objects │   ├── app The __main__ entry point for the homeserver │   ├── crypto The PKI client/server used for secure federation │   │   └── resource PKI helper objects (e.g. keys) │   ├── federation Server-server state replication logic │   ├── handlers The main business logic of the homeserver │   ├── http Wrappers around Twisted's HTTP server & client │   ├── rest Servlet-style RESTful API │   ├── storage Persistence subsystem (currently only sqlite3) │   │   └── schema sqlite persistence schema │   └── util Synapse-specific utilities ├── tests Unit tests for the Synapse homeserver └── webclient Basic AngularJS Matrix web client synapse-0.24.0/README.rst000066400000000000000000001105501317335640100147530ustar00rootroot00000000000000.. contents:: Introduction ============ Matrix is an ambitious new ecosystem for open federated Instant Messaging and VoIP. The basics you need to know to get up and running are: - Everything in Matrix happens in a room. Rooms are distributed and do not exist on any single server. Rooms can be located using convenience aliases like ``#matrix:matrix.org`` or ``#test:localhost:8448``. - Matrix user IDs look like ``@matthew:matrix.org`` (although in the future you will normally refer to yourself and others using a third party identifier (3PID): email address, phone number, etc rather than manipulating Matrix user IDs) The overall architecture is:: client <----> homeserver <=====================> homeserver <----> client https://somewhere.org/_matrix https://elsewhere.net/_matrix ``#matrix:matrix.org`` is the official support room for Matrix, and can be accessed by any client from https://matrix.org/docs/projects/try-matrix-now.html or via IRC bridge at irc://irc.freenode.net/matrix. 
Synapse is currently in rapid development, but as of version 0.5 we believe it is sufficiently stable to be run as an internet-facing service for real usage! About Matrix ============ Matrix specifies a set of pragmatic RESTful HTTP JSON APIs as an open standard, which handle: - Creating and managing fully distributed chat rooms with no single points of control or failure - Eventually-consistent cryptographically secure synchronisation of room state across a global open network of federated servers and services - Sending and receiving extensible messages in a room with (optional) end-to-end encryption[1] - Inviting, joining, leaving, kicking, banning room members - Managing user accounts (registration, login, logout) - Using 3rd Party IDs (3PIDs) such as email addresses, phone numbers, Facebook accounts to authenticate, identify and discover users on Matrix. - Placing 1:1 VoIP and Video calls These APIs are intended to be implemented on a wide range of servers, services and clients, letting developers build messaging and VoIP functionality on top of the entirely open Matrix ecosystem rather than using closed or proprietary solutions. The hope is for Matrix to act as the building blocks for a new generation of fully open and interoperable messaging and VoIP apps for the internet. Synapse is a reference "homeserver" implementation of Matrix from the core development team at matrix.org, written in Python/Twisted. It is intended to showcase the concept of Matrix and let folks see the spec in the context of a codebase and let you run your own homeserver and generally help bootstrap the ecosystem. In Matrix, every user runs one or more Matrix clients, which connect through to a Matrix homeserver. The homeserver stores all their personal chat history and user account information - much as a mail client connects through to an IMAP/SMTP server. Just like email, you can either run your own Matrix homeserver and control and own your own communications and history or use one hosted by someone else (e.g. matrix.org) - there is no single point of control or mandatory service provider in Matrix, unlike WhatsApp, Facebook, Hangouts, etc. We'd like to invite you to join #matrix:matrix.org (via https://matrix.org/docs/projects/try-matrix-now.html), run a homeserver, take a look at the `Matrix spec `_, and experiment with the `APIs `_ and `Client SDKs `_. Thanks for using Matrix! [1] End-to-end encryption is currently in beta: `blog post `_. Synapse Installation ==================== Synapse is the reference python/twisted Matrix homeserver implementation. System requirements: - POSIX-compliant system (tested on Linux & OS X) - Python 2.7 - At least 1GB of free RAM if you want to join large public rooms like #matrix:matrix.org Installing from source ---------------------- (Prebuilt packages are available for some platforms - see `Platform-Specific Instructions`_.) Synapse is written in python but some of the libraries it uses are written in C. So before we can install synapse itself we need a working C compiler and the header files for python C extensions. 
Installing prerequisites on Ubuntu or Debian:: sudo apt-get install build-essential python2.7-dev libffi-dev \ python-pip python-setuptools sqlite3 \ libssl-dev python-virtualenv libjpeg-dev libxslt1-dev Installing prerequisites on ArchLinux:: sudo pacman -S base-devel python2 python-pip \ python-setuptools python-virtualenv sqlite3 Installing prerequisites on CentOS 7 or Fedora 25:: sudo yum install libtiff-devel libjpeg-devel libzip-devel freetype-devel \ lcms2-devel libwebp-devel tcl-devel tk-devel redhat-rpm-config \ python-virtualenv libffi-devel openssl-devel sudo yum groupinstall "Development Tools" Installing prerequisites on Mac OS X:: xcode-select --install sudo easy_install pip sudo pip install virtualenv brew install pkg-config libffi Installing prerequisites on Raspbian:: sudo apt-get install build-essential python2.7-dev libffi-dev \ python-pip python-setuptools sqlite3 \ libssl-dev python-virtualenv libjpeg-dev sudo pip install --upgrade pip sudo pip install --upgrade ndg-httpsclient sudo pip install --upgrade virtualenv Installing prerequisites on openSUSE:: sudo zypper in -t pattern devel_basis sudo zypper in python-pip python-setuptools sqlite3 python-virtualenv \ python-devel libffi-devel libopenssl-devel libjpeg62-devel Installing prerequisites on OpenBSD:: doas pkg_add python libffi py-pip py-setuptools sqlite3 py-virtualenv \ libxslt To install the synapse homeserver run:: virtualenv -p python2.7 ~/.synapse source ~/.synapse/bin/activate pip install --upgrade pip pip install --upgrade setuptools pip install https://github.com/matrix-org/synapse/tarball/master This installs synapse, along with the libraries it uses, into a virtual environment under ``~/.synapse``. Feel free to pick a different directory if you prefer. In case of problems, please see the _`Troubleshooting` section below. Alternatively, Silvio Fricke has contributed a Dockerfile to automate the above in Docker at https://registry.hub.docker.com/u/silviof/docker-matrix/. Also, Martin Giess has created an auto-deployment process with vagrant/ansible, tested with VirtualBox/AWS/DigitalOcean - see https://github.com/EMnify/matrix-synapse-auto-deploy for details. Configuring synapse ------------------- Before you can start Synapse, you will need to generate a configuration file. To do this, run (in your virtualenv, as before):: cd ~/.synapse python -m synapse.app.homeserver \ --server-name my.domain.name \ --config-path homeserver.yaml \ --generate-config \ --report-stats=[yes|no] ... substituting an appropriate value for ``--server-name``. The server name determines the "domain" part of user-ids for users on your server: these will all be of the format ``@user:my.domain.name``. It also determines how other matrix servers will reach yours for `Federation`_. For a test configuration, set this to the hostname of your server. For a more production-ready setup, you will probably want to specify your domain (``example.com``) rather than a matrix-specific hostname here (in the same way that your email address is probably ``user@example.com`` rather than ``user@email.example.com``) - but doing so may require more advanced setup - see `Setting up Federation`_. Beware that the server name cannot be changed later. This command will generate you a config file that you can then customise, but it will also generate a set of keys for you. These keys will allow your Home Server to identify itself to other Home Servers, so don't lose or delete them. It would be wise to back them up somewhere safe. 
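For example, a minimal sketch of such a backup, assuming the ``~/.synapse`` working directory used above and the default generated filenames (a ``*.signing.key`` file plus the TLS certificate/key/DH files referenced by ``tls_certificate_path``, ``tls_private_key_path`` and ``tls_dh_params_path``); check your own ``homeserver.yaml`` for the actual paths before copying::

    # copy the signing key and TLS material to a separate, secure location
    mkdir -p /path/to/secure/backup
    cp ~/.synapse/*.signing.key ~/.synapse/*.tls.* /path/to/secure/backup/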
(If, for whatever reason, you do need to change your Home Server's keys, you may find that other Home Servers have the old key cached. If you update the signing key, you should change the name of the key in the ``.signing.key`` file (the second word) to something different. See `the spec`__ for more information on key management.) .. __: `key_management`_ The default configuration exposes two HTTP ports: 8008 and 8448. Port 8008 is configured without TLS; it should be behind a reverse proxy for TLS/SSL termination on port 443 which in turn should be used for clients. Port 8448 is configured to use TLS with a self-signed certificate. If you would like to do an initial test with a client without having to set up a reverse proxy, you can temporarily use another certificate. (Note that a self-signed certificate is fine for `Federation`_). You can do so by changing ``tls_certificate_path``, ``tls_private_key_path`` and ``tls_dh_params_path`` in ``homeserver.yaml``; alternatively, you can use a reverse-proxy, but be sure to read `Using a reverse proxy with Synapse`_ when doing so. Apart from port 8448 using TLS, both ports are the same in the default configuration. Registering a user ------------------ You will need at least one user on your server in order to use a Matrix client. Users can be registered either `via a Matrix client`__, or via a commandline script. .. __: `client-user-reg`_ To get started, it is easiest to use the command line to register new users:: $ source ~/.synapse/bin/activate $ synctl start # if not already running $ register_new_matrix_user -c homeserver.yaml https://localhost:8448 New user localpart: erikj Password: Confirm password: Make admin [no]: Success! This process uses a setting ``registration_shared_secret`` in ``homeserver.yaml``, which is shared between Synapse itself and the ``register_new_matrix_user`` script. It doesn't matter what it is (a random value is generated by ``--generate-config``), but it should be kept secret, as anyone with knowledge of it can register users on your server even if ``enable_registration`` is ``false``. Setting up a TURN server ------------------------ For reliable VoIP calls to be routed via this homeserver, you MUST configure a TURN server. See `<docs/turn-howto.rst>`_ for details. IPv6 ---- As of Synapse 0.19 we finally support IPv6, many thanks to @kyrias and @glyph for providing PR #1696. However, for federation to work on hosts with IPv6 DNS servers you **must** be running Twisted 17.1.0 or later - see https://github.com/matrix-org/synapse/issues/1002 for details. We can't make Synapse depend on Twisted 17.1 by default yet as it will break most older distributions (see https://github.com/matrix-org/synapse/pull/1909) so if you are using operating system dependencies you'll have to install your own Twisted 17.1 package via pip or backports etc. If you're running in a virtualenv then pip should have installed the newest Twisted automatically, but if your virtualenv is old you will need to manually upgrade to a newer Twisted dependency via: pip install Twisted>=17.1.0 Running Synapse =============== To actually run your new homeserver, pick a working directory for Synapse to run (e.g. ``~/.synapse``), and:: cd ~/.synapse source ./bin/activate synctl start Connecting to Synapse from a client =================================== The easiest way to try out your new Synapse installation is by connecting to it from a web client. The easiest option is probably the one at http://riot.im/app.
You will need to specify a "Custom server" when you log on or register: set this to ``https://domain.tld`` if you set up a reverse proxy following the recommended setup, or ``https://localhost:8448`` - remember to specify the port (``:8448``) if not ``:443`` unless you changed the configuration. (Leave the identity server as the default - see `Identity servers`_.) If using port 8448 you will run into errors until you accept the self-signed certificate. You can easily do this by going to ``https://localhost:8448`` directly with your browser and accepting the presented certificate. You can then go back to your web client and proceed further. If all goes well you should at least be able to log in, create a room, and start sending messages. (The homeserver runs a web client by default at https://localhost:8448/, though as of the time of writing it is somewhat outdated and not really recommended - https://github.com/matrix-org/synapse/issues/1527). .. _`client-user-reg`: Registering a new user from a client ------------------------------------ By default, registration of new users via Matrix clients is disabled. To enable it, specify ``enable_registration: true`` in ``homeserver.yaml``. (It is then recommended to also set up CAPTCHA - see ``_.) Once ``enable_registration`` is set to ``true``, it is possible to register a user via `riot.im `_ or other Matrix clients. Your new user name will be formed partly from the ``server_name`` (see `Configuring synapse`_), and partly from a localpart you specify when you create the account. Your name will take the form of:: @localpart:my.domain.name (pronounced "at localpart on my dot domain dot name"). As when logging in, you will need to specify a "Custom server". Specify your desired ``localpart`` in the 'User name' box. Security Note ============= Matrix serves raw user generated data in some APIs - specifically the `content repository endpoints `_. Whilst we have tried to mitigate against possible XSS attacks (e.g. https://github.com/matrix-org/synapse/pull/1021) we recommend running matrix homeservers on a dedicated domain name, to limit the ability of malicious user generated content served to web browsers via a matrix API to attack webapps hosted on the same domain. This is particularly true of sharing a matrix webclient and server on the same domain. See https://github.com/vector-im/vector-web/issues/1977 and https://developer.github.com/changes/2014-04-25-user-content-security for more details. Platform-Specific Instructions ============================== Debian ------ Matrix provides official Debian packages via apt from http://matrix.org/packages/debian/. Note that these packages do not include a client - choose one from https://matrix.org/docs/projects/try-matrix-now.html (or build your own with one of our SDKs :) Fedora ------ Oleg Girko provides Fedora RPMs at https://obs.infoserver.lv/project/monitor/matrix-synapse ArchLinux --------- The quickest way to get up and running with ArchLinux is probably with the community package https://www.archlinux.org/packages/community/any/matrix-synapse/, which should pull in most of the necessary dependencies. If the default web client is to be served (enabled by default in the generated config), https://www.archlinux.org/packages/community/any/python2-matrix-angular-sdk/ will also need to be installed.
Alternatively, to install using pip a few changes may be needed as ArchLinux defaults to python 3, but synapse currently assumes python 2.7 by default: pip may be outdated (6.0.7-1 and needs to be upgraded to 6.0.8-1 ):: sudo pip2.7 install --upgrade pip You also may need to explicitly specify python 2.7 again during the install request:: pip2.7 install https://github.com/matrix-org/synapse/tarball/master If you encounter an error with lib bcrypt causing an Wrong ELF Class: ELFCLASS32 (x64 Systems), you may need to reinstall py-bcrypt to correctly compile it under the right architecture. (This should not be needed if installing under virtualenv):: sudo pip2.7 uninstall py-bcrypt sudo pip2.7 install py-bcrypt During setup of Synapse you need to call python2.7 directly again:: cd ~/.synapse python2.7 -m synapse.app.homeserver \ --server-name machine.my.domain.name \ --config-path homeserver.yaml \ --generate-config ...substituting your host and domain name as appropriate. FreeBSD ------- Synapse can be installed via FreeBSD Ports or Packages contributed by Brendan Molloy from: - Ports: ``cd /usr/ports/net-im/py-matrix-synapse && make install clean`` - Packages: ``pkg install py27-matrix-synapse`` OpenBSD ------- There is currently no port for OpenBSD. Additionally, OpenBSD's security settings require a slightly more difficult installation process. 1) Create a new directory in ``/usr/local`` called ``_synapse``. Also, create a new user called ``_synapse`` and set that directory as the new user's home. This is required because, by default, OpenBSD only allows binaries which need write and execute permissions on the same memory space to be run from ``/usr/local``. 2) ``su`` to the new ``_synapse`` user and change to their home directory. 3) Create a new virtualenv: ``virtualenv -p python2.7 ~/.synapse`` 4) Source the virtualenv configuration located at ``/usr/local/_synapse/.synapse/bin/activate``. This is done in ``ksh`` by using the ``.`` command, rather than ``bash``'s ``source``. 5) Optionally, use ``pip`` to install ``lxml``, which Synapse needs to parse webpages for their titles. 6) Use ``pip`` to install this repository: ``pip install https://github.com/matrix-org/synapse/tarball/master`` 7) Optionally, change ``_synapse``'s shell to ``/bin/false`` to reduce the chance of a compromised Synapse server being used to take over your box. After this, you may proceed with the rest of the install directions. NixOS ----- Robin Lambertz has packaged Synapse for NixOS at: https://github.com/NixOS/nixpkgs/blob/master/nixos/modules/services/misc/matrix-synapse.nix Windows Install --------------- Synapse can be installed on Cygwin. It requires the following Cygwin packages: - gcc - git - libffi-devel - openssl (and openssl-devel, python-openssl) - python - python-setuptools The content repository requires additional packages and will be unable to process uploads without them: - libjpeg8 - libjpeg8-devel - zlib If you choose to install Synapse without these packages, you will need to reinstall ``pillow`` for changes to be applied, e.g. ``pip uninstall pillow`` ``pip install pillow --user`` Troubleshooting: - You may need to upgrade ``setuptools`` to get this to work correctly: ``pip install setuptools --upgrade``. - You may encounter errors indicating that ``ffi.h`` is missing, even with ``libffi-devel`` installed. If you do, copy the ``.h`` files: ``cp /usr/lib/libffi-3.0.13/include/*.h /usr/include`` - You may need to install libsodium from source in order to install PyNacl. 
If you do, you may need to create a symlink to ``libsodium.a`` so ``ld`` can find it: ``ln -s /usr/local/lib/libsodium.a /usr/lib/libsodium.a`` Troubleshooting =============== Troubleshooting Installation ---------------------------- Synapse requires pip 1.7 or later, so if your OS provides too old a version you may need to manually upgrade it:: sudo pip install --upgrade pip Installing may fail with ``Could not find any downloads that satisfy the requirement pymacaroons-pynacl (from matrix-synapse==0.12.0)``. You can fix this by manually upgrading pip and virtualenv:: sudo pip install --upgrade virtualenv You can next rerun ``virtualenv -p python2.7 synapse`` to update the virtual env. Installing may fail during installing virtualenv with ``InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.`` You can fix this by manually installing ndg-httpsclient:: pip install --upgrade ndg-httpsclient Installing may fail with ``mock requires setuptools>=17.1. Aborting installation``. You can fix this by upgrading setuptools:: pip install --upgrade setuptools If pip crashes mid-installation for reason (e.g. lost terminal), pip may refuse to run until you remove the temporary installation directory it created. To reset the installation:: rm -rf /tmp/pip_install_matrix pip seems to leak *lots* of memory during installation. For instance, a Linux host with 512MB of RAM may run out of memory whilst installing Twisted. If this happens, you will have to individually install the dependencies which are failing, e.g.:: pip install twisted On OS X, if you encounter clang: error: unknown argument: '-mno-fused-madd' you will need to export CFLAGS=-Qunused-arguments. Troubleshooting Running ----------------------- If synapse fails with ``missing "sodium.h"`` crypto errors, you may need to manually upgrade PyNaCL, as synapse uses NaCl (http://nacl.cr.yp.to/) for encryption and digital signatures. Unfortunately PyNACL currently has a few issues (https://github.com/pyca/pynacl/issues/53) and (https://github.com/pyca/pynacl/issues/79) that mean it may not install correctly, causing all tests to fail with errors about missing "sodium.h". To fix try re-installing from PyPI or directly from (https://github.com/pyca/pynacl):: # Install from PyPI pip install --user --upgrade --force pynacl # Install from github pip install --user https://github.com/pyca/pynacl/tarball/master Running out of File Handles ~~~~~~~~~~~~~~~~~~~~~~~~~~~ If synapse runs out of filehandles, it typically fails badly - live-locking at 100% CPU, and/or failing to accept new TCP connections (blocking the connecting client). Matrix currently can legitimately use a lot of file handles, thanks to busy rooms like #matrix:matrix.org containing hundreds of participating servers. The first time a server talks in a room it will try to connect simultaneously to all participating servers, which could exhaust the available file descriptors between DNS queries & HTTPS sockets, especially if DNS is slow to respond. (We need to improve the routing algorithm used to be better than full mesh, but as of June 2017 this hasn't happened yet). If you hit this failure mode, we recommend increasing the maximum number of open file handles to be at least 4096 (assuming a default of 1024 or 256). 
This is typically done by editing ``/etc/security/limits.conf`` Separately, Synapse may leak file handles if inbound HTTP requests get stuck during processing - e.g. blocked behind a lock or talking to a remote server etc. This is best diagnosed by matching up the 'Received request' and 'Processed request' log lines and looking for any 'Processed request' lines which take more than a few seconds to execute. Please let us know at #matrix-dev:matrix.org if you see this failure mode so we can help debug it, however. ArchLinux ~~~~~~~~~ If running `$ synctl start` fails with 'returned non-zero exit status 1', you will need to explicitly call Python2.7 - either running as:: python2.7 -m synapse.app.homeserver --daemonize -c homeserver.yaml ...or by editing synctl with the correct python executable. Upgrading an existing Synapse ============================= The instructions for upgrading synapse are in `UPGRADE.rst`_. Please check these instructions as upgrading may require extra steps for some versions of synapse. .. _UPGRADE.rst: UPGRADE.rst .. _federation: Setting up Federation ===================== Federation is the process by which users on different servers can participate in the same room. For this to work, those other servers must be able to contact yours to send messages. As explained in `Configuring synapse`_, the ``server_name`` in your ``homeserver.yaml`` file determines the way that other servers will reach yours. By default, they will treat it as a hostname and try to connect to port 8448. This is easy to set up and will work with the default configuration, provided you set the ``server_name`` to match your machine's public DNS hostname. For a more flexible configuration, you can set up a DNS SRV record. This allows you to run your server on a machine that might not have the same name as your domain name. For example, you might want to run your server at ``synapse.example.com``, but have your Matrix user-ids look like ``@user:example.com``. (A SRV record also allows you to change the port from the default 8448. However, if you are thinking of using a reverse-proxy on the federation port, which is not recommended, be sure to read `Reverse-proxying the federation port`_ first.) To use a SRV record, first create your SRV record and publish it in DNS. This should have the format ``_matrix._tcp. IN SRV 10 0 ``. The DNS record should then look something like:: $ dig -t srv _matrix._tcp.example.com _matrix._tcp.example.com. 3600 IN SRV 10 0 8448 synapse.example.com. You can then configure your homeserver to use ```` as the domain in its user-ids, by setting ``server_name``:: python -m synapse.app.homeserver \ --server-name \ --config-path homeserver.yaml \ --generate-config python -m synapse.app.homeserver --config-path homeserver.yaml If you've already generated the config file, you need to edit the ``server_name`` in your ``homeserver.yaml`` file. If you've already started Synapse and a database has been created, you will have to recreate the database. If all goes well, you should be able to `connect to your server with a client`__, and then join a room via federation. (Try ``#matrix-dev:matrix.org`` as a first step. "Matrix HQ"'s sheer size and activity level tends to make even the largest boxes pause for thought.) .. __: `Connecting to Synapse from a client`_ Troubleshooting --------------- The typical failure mode with federation is that when you try to join a room, it is rejected with "401: Unauthorized". Generally this means that other servers in the room couldn't access yours. 
(Joining a room over federation is a complicated dance which requires connections in both directions). So, things to check are: * If you are trying to use a reverse-proxy, read `Reverse-proxying the federation port`_. * If you are not using a SRV record, check that your ``server_name`` (the part of your user-id after the ``:``) matches your hostname, and that port 8448 on that hostname is reachable from outside your network. * If you *are* using a SRV record, check that it matches your ``server_name`` (it should be ``_matrix._tcp.``), and that the port and hostname it specifies are reachable from outside your network. Running a Demo Federation of Synapses ------------------------------------- If you want to get up and running quickly with a trio of homeservers in a private federation, there is a script in the ``demo`` directory. This is mainly useful just for development purposes. See ``_. Using PostgreSQL ================ As of Synapse 0.9, `PostgreSQL `_ is supported as an alternative to the `SQLite `_ database that Synapse has traditionally used for convenience and simplicity. The advantages of Postgres include: * significant performance improvements due to the superior threading and caching model, smarter query optimiser * allowing the DB to be run on separate hardware * allowing basic active/backup high-availability with a "hot spare" synapse pointing at the same DB master, as well as enabling DB replication in synapse itself. For information on how to install and use PostgreSQL, please see `docs/postgres.rst `_. .. _reverse-proxy: Using a reverse proxy with Synapse ================================== It is recommended to put a reverse proxy such as `nginx `_, `Apache `_ or `HAProxy `_ in front of Synapse. One advantage of doing so is that it means that you can expose the default https port (443) to Matrix clients without needing to run Synapse with root privileges. The most important thing to know here is that Matrix clients and other Matrix servers do not necessarily need to connect to your server via the same port. Indeed, clients will use port 443 by default, whereas servers default to port 8448. Where these are different, we refer to the 'client port' and the 'federation port'. The next most important thing to know is that using a reverse-proxy on the federation port has a number of pitfalls. It is possible, but be sure to read `Reverse-proxying the federation port`_. The recommended setup is therefore to configure your reverse-proxy on port 443 to port 8008 of synapse for client connections, but to also directly expose port 8448 for server-server connections. All the Matrix endpoints begin ``/_matrix``, so an example nginx configuration might look like:: server { listen 443 ssl; listen [::]:443 ssl; server_name matrix.example.com; location /_matrix { proxy_pass http://localhost:8008; proxy_set_header X-Forwarded-For $remote_addr; } } You will also want to set ``bind_addresses: ['127.0.0.1']`` and ``x_forwarded: true`` for port 8008 in ``homeserver.yaml`` to ensure that client IP addresses are recorded correctly. Having done so, you can then use ``https://matrix.example.com`` (instead of ``https://matrix.example.com:8448``) as the "Custom server" when `Connecting to Synapse from a client`_. 
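For reference, a client-port listener in ``homeserver.yaml`` configured along those lines might look something like the following (a sketch only - the ``listeners`` section generated by ``--generate-config`` contains further options, and the exact layout of your file may differ)::

    listeners:
      - port: 8008
        tls: false
        bind_addresses: ['127.0.0.1']
        type: http
        x_forwarded: true
        resources:
          - names: [client]
            compress: true

With a setup like this, the reverse-proxy terminates TLS on port 443 and forwards plain HTTP to Synapse on 127.0.0.1:8008, while port 8448 continues to serve federation traffic directly.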
Reverse-proxying the federation port ------------------------------------ There are two issues to consider before using a reverse-proxy on the federation port: * Due to the way SSL certificates are managed in the Matrix federation protocol (see `spec`__), Synapse needs to be configured with the path to the SSL certificate, *even if you do not terminate SSL at Synapse*. .. __: `key_management`_ * Synapse does not currently support SNI on the federation protocol (`bug #1491 `_), which means that using name-based virtual hosting is unreliable. Furthermore, a number of the normal reasons for using a reverse-proxy do not apply: * Other servers will connect on port 8448 by default, so there is no need to listen on port 443 (for federation, at least), which avoids the need for root privileges and virtual hosting. * A self-signed SSL certificate is fine for federation, so there is no need to automate renewals. (The certificate generated by ``--generate-config`` is valid for 10 years.) If you want to set up a reverse-proxy on the federation port despite these caveats, you will need to do the following: * In ``homeserver.yaml``, set ``tls_certificate_path`` to the path to the SSL certificate file used by your reverse-proxy, and set ``no_tls`` to ``True``. (``tls_private_key_path`` will be ignored if ``no_tls`` is ``True``.) * In your reverse-proxy configuration: * If there are other virtual hosts on the same port, make sure that the *default* one uses the certificate configured above. * Forward ``/_matrix`` to Synapse. * If your reverse-proxy is not listening on port 8448, publish a SRV record to tell other servers how to find you. See `Setting up Federation`_. When updating the SSL certificate, just update the file pointed to by ``tls_certificate_path``: there is no need to restart synapse. (You may like to use a symbolic link to help make this process atomic.) The most common mistake when setting up federation is not to tell Synapse about your SSL certificate. To check it, you can visit ``https://matrix.org/federationtester/api/report?server_name=``. Unfortunately, there is no UI for this yet, but, you should see ``"MatchingTLSFingerprint": true``. If not, check that ``Certificates[0].SHA256Fingerprint`` (the fingerprint of the certificate presented by your reverse-proxy) matches ``Keys.tls_fingerprints[0].sha256`` (the fingerprint of the certificate Synapse is using). Identity Servers ================ Identity servers have the job of mapping email addresses and other 3rd Party IDs (3PIDs) to Matrix user IDs, as well as verifying the ownership of 3PIDs before creating that mapping. **They are not where accounts or credentials are stored - these live on home servers. Identity Servers are just for mapping 3rd party IDs to matrix IDs.** This process is very security-sensitive, as there is obvious risk of spam if it is too easy to sign up for Matrix accounts or harvest 3PID data. In the longer term, we hope to create a decentralised system to manage it (`matrix-doc #712 `_), but in the meantime, the role of managing trusted identity in the Matrix ecosystem is farmed out to a cluster of known trusted ecosystem partners, who run 'Matrix Identity Servers' such as `Sydent `_, whose role is purely to authenticate and track 3PID logins and publish end-user public keys. You can host your own copy of Sydent, but this will prevent you reaching other users in the Matrix ecosystem via their email address, and prevent them finding you. 
We therefore recommend that you use one of the centralised identity servers at ``https://matrix.org`` or ``https://vector.im`` for now. To reiterate: the Identity server will only be used if you choose to associate an email address with your account, or send an invite to another user via their email address. URL Previews ============ Synapse 0.15.0 introduces a new API for previewing URLs at ``/_matrix/media/r0/preview_url``. This is disabled by default. To turn it on, you must enable the ``url_preview_enabled: True`` config parameter and explicitly specify the IP ranges that Synapse is not allowed to spider for previewing in the ``url_preview_ip_range_blacklist`` configuration parameter. This is critical from a security perspective to stop arbitrary Matrix users spidering 'internal' URLs on your network. At the very least we recommend that your loopback and RFC1918 IP addresses are blacklisted. This also requires the optional lxml and netaddr python dependencies to be installed. Password reset ============== If a user has registered an email address to their account using an identity server, they can request a password-reset token via clients such as Vector. A manual password reset can be done via direct database access as follows. First calculate the hash of the new password:: $ source ~/.synapse/bin/activate $ ./scripts/hash_password Password: Confirm password: $2a$12$xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx Then update the `users` table in the database:: UPDATE users SET password_hash='$2a$12$xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx' WHERE name='@test:test.com'; Synapse Development =================== Before setting up a development environment for synapse, make sure you have the system dependencies (such as the python header files) installed - see `Installing from source`_. To check out synapse for development, clone the git repo into a working directory of your choice:: git clone https://github.com/matrix-org/synapse.git cd synapse Synapse has a number of external dependencies that are easiest to install using pip and a virtualenv:: virtualenv -p python2.7 env source env/bin/activate python synapse/python_dependencies.py | xargs pip install pip install lxml mock This will download and install all the needed dependencies into a virtual env. Once this is done, you may wish to run Synapse's unit tests, to check that everything is installed as it should be:: PYTHONPATH="." trial tests This should end with a 'PASSED' result:: Ran 143 tests in 0.601s PASSED (successes=143) Building Internal API Documentation =================================== Before building the internal API documentation, install sphinx and sphinxcontrib-napoleon:: pip install sphinx pip install sphinxcontrib-napoleon Building internal API documentation:: python setup.py build_sphinx Help!! Synapse eats all my RAM! =============================== Synapse's architecture is currently quite RAM hungry - we deliberately cache a lot of recent room data and metadata in RAM in order to speed up common requests. We'll improve this in future, but for now the easiest way to reduce the RAM usage (at the risk of slowing things down) is to set the almost-undocumented ``SYNAPSE_CACHE_FACTOR`` environment variable. The default is 0.5, which can be decreased to reduce RAM usage in memory-constrained environments, or increased if performance starts to degrade. 
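For example, to run with smaller caches than the default factor of 0.5, you could export the variable before starting Synapse (a sketch only; the value and the use of ``synctl`` here are illustrative)::

    export SYNAPSE_CACHE_FACTOR=0.25
    synctl restart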
.. _`key_management`: https://matrix.org/docs/spec/server_server/unstable.html#retrieving-server-keys synapse-0.24.0/UPGRADE.rst000066400000000000000000000235531317335640100151130ustar00rootroot00000000000000Upgrading Synapse ================= Before upgrading, check if any special steps are required to upgrade from what you currently have installed to the current version of synapse. The extra instructions that may be required are listed later in this document. 1. If synapse was installed in a virtualenv then activate that virtualenv before upgrading. If synapse is installed in a virtualenv in ``~/.synapse/`` then run: .. code:: bash source ~/.synapse/bin/activate 2. If synapse was installed using pip then upgrade to the latest version by running: .. code:: bash pip install --upgrade --process-dependency-links https://github.com/matrix-org/synapse/tarball/master # restart synapse synctl restart If synapse was installed using git then upgrade to the latest version by running: .. code:: bash # Pull the latest version of the master branch. git pull # Update the versions of synapse's python dependencies. python synapse/python_dependencies.py | xargs pip install --upgrade # restart synapse ./synctl restart To check whether your update was successful, you can check the Server header returned by the Client-Server API: .. code:: bash # replace with the hostname of your synapse homeserver. # You may need to specify a port (eg, :8448) if your server is not # configured on port 443. curl -kv https:///_matrix/client/versions 2>&1 | grep "Server:" Upgrading to v0.15.0 ==================== If you want to use the new URL previewing API (/_matrix/media/r0/preview_url) then you have to explicitly enable it in the config and update your dependencies. See README.rst for details. Upgrading to v0.11.0 ==================== This release includes the option to send anonymous usage stats to matrix.org, and requires that administrators explicitly opt in or out by setting the ``report_stats`` option to either ``true`` or ``false``. We would really appreciate it if you could help our project out by reporting anonymized usage statistics from your homeserver. Only very basic aggregate data (e.g. number of users) will be reported, but it helps us to track the growth of the Matrix community, and helps us to make Matrix a success, as well as to convince other networks that they should peer with us. Upgrading to v0.9.0 =================== Application services have had a breaking API change in this version. They can no longer register themselves with a home server using the AS HTTP API. This decision was made because a compromised application service with free rein to register any regex in effect grants full read/write access to the home server if a regex of ``.*`` is used. An attack where a compromised AS re-registers itself with ``.*`` was deemed too big a security risk to ignore, and so the ability to register with the HS remotely has been removed. It has been replaced by specifying a list of application service registrations in ``homeserver.yaml``:: app_service_config_files: ["registration-01.yaml", "registration-02.yaml"] Where ``registration-01.yaml`` looks like:: url: # e.g. "https://my.application.service.com" as_token: hs_token: sender_localpart: # This is a new field which denotes the user_id localpart when using the AS token namespaces: users: - exclusive: regex: # e.g. "@prefix_.*" aliases: - exclusive: regex: rooms: - exclusive: regex: Upgrading to v0.8.0 =================== Servers which use captchas will need to add their public key to:: static/client/register/register_config.js window.matrixRegistrationConfig = { recaptcha_public_key: "YOUR_PUBLIC_KEY" }; This is required in order to support registration fallback (typically used on mobile devices). Upgrading to v0.7.0 =================== New dependencies are: - pydenticon - simplejson - syutil - matrix-angular-sdk To pull in these dependencies in a virtual env, run:: python synapse/python_dependencies.py | xargs -n 1 pip install Upgrading to v0.6.0 =================== To pull in new dependencies, run:: python setup.py develop --user This update includes a change to the database schema. To upgrade you first need to upgrade the database by running:: python scripts/upgrade_db_to_v0.6.0.py Where `` is the location of the database, `` is the server name as specified in the synapse configuration, and `` is the location of the signing key as specified in the synapse configuration. This may take some time to complete. Failures of signatures and content hashes can safely be ignored. Upgrading to v0.5.1 =================== Depending on precisely when you installed v0.5.0, you may have ended up with a stale release of the reference matrix webclient installed as a python module. To uninstall it and ensure you are depending on the latest module, please run:: $ pip uninstall syweb Upgrading to v0.5.0 =================== The webclient has been split out into a separate repository/package in this release. Before you restart your homeserver you will need to pull in the webclient package by running:: python setup.py develop --user This release completely changes the database schema and so requires upgrading it before starting the new version of the homeserver. The script "database-prepare-for-0.5.0.sh" should be used to upgrade the database. This will save all user information, such as logins and profiles, but will otherwise purge the database. This includes messages, which rooms the home server was a member of and room alias mappings. If you would like to keep your history, please take a copy of your database file and ask for help in #matrix:matrix.org. The upgrade process is, unfortunately, non-trivial and requires human intervention to resolve any resulting conflicts during the upgrade process. Before running the command, the homeserver should first be completely shut down. To run it, simply specify the location of the database, e.g.: ./scripts/database-prepare-for-0.5.0.sh "homeserver.db" Once this has successfully completed it will be safe to restart the homeserver. You may notice that the homeserver takes a few seconds longer to restart than usual as it reinitializes the database. On startup of the new version, users can rejoin remote rooms either via room aliases or by being reinvited. Alternatively, if any other homeserver sends a message to a room that the homeserver was previously in, the local HS will automatically rejoin the room. Upgrading to v0.4.0 =================== This release needs an updated syutil version. Run:: python setup.py develop You will also need to upgrade your configuration as the signing key format has changed. Run:: python -m synapse.app.homeserver --config-path --generate-config Upgrading to v0.3.0 =================== The registration API now closely matches the login API. 
This introduces a bit more backwards and forwards between the HS and the client, but this improves the overall flexibility of the API. You can now GET on /register to retrieve a list of valid registration flows. Upon choosing one, they are submitted in the same way as login, e.g:: { type: m.login.password, user: foo, password: bar } The default HS supports 2 flows, with and without Identity Server email authentication. Enabling captcha on the HS will add in an extra step to all flows: ``m.login.recaptcha`` which must be completed before you can transition to the next stage. There is a new login type: ``m.login.email.identity`` which contains the ``threepidCreds`` key which were previously sent in the original register request. For more information on this, see the specification. Web Client ---------- The VoIP specification has changed between v0.2.0 and v0.3.0. Users should refresh any browser tabs to get the latest web client code. Users on v0.2.0 of the web client will not be able to call those on v0.3.0 and vice versa. Upgrading to v0.2.0 =================== The home server now requires setting up of SSL config before it can run. To automatically generate default config use:: $ python synapse/app/homeserver.py \ --server-name machine.my.domain.name \ --bind-port 8448 \ --config-path homeserver.config \ --generate-config This config can be edited if desired, for example to specify a different SSL certificate to use. Once done you can run the home server using:: $ python synapse/app/homeserver.py --config-path homeserver.config See the README.rst for more information. Also note that some config options have been renamed, including: - "host" to "server-name" - "database" to "database-path" - "port" to "bind-port" and "unsecure-port" Upgrading to v0.0.1 =================== This release completely changes the database schema and so requires upgrading it before starting the new version of the homeserver. The script "database-prepare-for-0.0.1.sh" should be used to upgrade the database. This will save all user information, such as logins and profiles, but will otherwise purge the database. This includes messages, which rooms the home server was a member of and room alias mappings. Before running the command the homeserver should be first completely shutdown. To run it, simply specify the location of the database, e.g.: ./scripts/database-prepare-for-0.0.1.sh "homeserver.db" Once this has successfully completed it will be safe to restart the homeserver. You may notice that the homeserver takes a few seconds longer to restart than usual as it reinitializes the database. On startup of the new version, users can either rejoin remote rooms using room aliases or by being reinvited. Alternatively, if any other homeserver sends a message to a room that the homeserver was previously in the local HS will automatically rejoin the room. synapse-0.24.0/contrib/000077500000000000000000000000001317335640100147225ustar00rootroot00000000000000synapse-0.24.0/contrib/cmdclient/000077500000000000000000000000001317335640100166645ustar00rootroot00000000000000synapse-0.24.0/contrib/cmdclient/console.py000077500000000000000000000704631317335640100207150ustar00rootroot00000000000000#!/usr/bin/env python # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Starts a synapse client console. """ from twisted.internet import reactor, defer, threads from http import TwistedHttpClient import argparse import cmd import getpass import json import shlex import sys import time import urllib import urlparse import nacl.signing import nacl.encoding from signedjson.sign import verify_signed_json, SignatureVerifyException CONFIG_JSON = "cmdclient_config.json" TRUSTED_ID_SERVERS = [ 'localhost:8001' ] class SynapseCmd(cmd.Cmd): """Basic synapse command-line processor. This processes commands from the user and calls the relevant HTTP methods. """ def __init__(self, http_client, server_url, identity_server_url, username, token): cmd.Cmd.__init__(self) self.http_client = http_client self.http_client.verbose = True self.config = { "url": server_url, "identityServerUrl": identity_server_url, "user": username, "token": token, "verbose": "on", "complete_usernames": "on", "send_delivery_receipts": "on" } self.path_prefix = "/_matrix/client/api/v1" self.event_stream_token = "END" self.prompt = ">>> " def do_EOF(self, line): # allows CTRL+D quitting return True def emptyline(self): pass # else it repeats the previous command def _usr(self): return self.config["user"] def _tok(self): return self.config["token"] def _url(self): return self.config["url"] + self.path_prefix def _identityServerUrl(self): return self.config["identityServerUrl"] def _is_on(self, config_name): if config_name in self.config: return self.config[config_name] == "on" return False def _domain(self): if "user" not in self.config or not self.config["user"]: return None return self.config["user"].split(":")[1] def do_config(self, line): """ Show the config for this client: "config" Edit a key value mapping: "config key value" e.g. "config token 1234" Config variables: user: The username to auth with. token: The access token to auth with. url: The url of the server. verbose: [on|off] The verbosity of requests/responses. complete_usernames: [on|off] Auto complete partial usernames by assuming they are on the same homeserver as you. E.g. name >> @name:yourhost send_delivery_receipts: [on|off] Automatically send receipts to messages when performing a 'stream' command. Additional key/values can be added and can be substituted into requests by using $. E.g. 'config roomid room1' then 'raw get /rooms/$roomid'. 
""" if len(line) == 0: print json.dumps(self.config, indent=4) return try: args = self._parse(line, ["key", "val"], force_keys=True) # make sure restricted config values are checked config_rules = [ # key, valid_values ("verbose", ["on", "off"]), ("complete_usernames", ["on", "off"]), ("send_delivery_receipts", ["on", "off"]) ] for key, valid_vals in config_rules: if key == args["key"] and args["val"] not in valid_vals: print "%s value must be one of %s" % (args["key"], valid_vals) return # toggle the http client verbosity if args["key"] == "verbose": self.http_client.verbose = "on" == args["val"] # assign the new config self.config[args["key"]] = args["val"] print json.dumps(self.config, indent=4) save_config(self.config) except Exception as e: print e def do_register(self, line): """Registers for a new account: "register " : The desired user ID : Do not automatically clobber config values. """ args = self._parse(line, ["userid", "noupdate"]) password = None pwd = None pwd2 = "_" while pwd != pwd2: pwd = getpass.getpass("Type a password for this user: ") pwd2 = getpass.getpass("Retype the password: ") if pwd != pwd2 or len(pwd) == 0: print "Password mismatch." pwd = None else: password = pwd body = { "type": "m.login.password" } if "userid" in args: body["user"] = args["userid"] if password: body["password"] = password reactor.callFromThread(self._do_register, body, "noupdate" not in args) @defer.inlineCallbacks def _do_register(self, data, update_config): # check the registration flows url = self._url() + "/register" json_res = yield self.http_client.do_request("GET", url) print json.dumps(json_res, indent=4) passwordFlow = None for flow in json_res["flows"]: if flow["type"] == "m.login.recaptcha" or ("stages" in flow and "m.login.recaptcha" in flow["stages"]): print "Unable to register: Home server requires captcha." return if flow["type"] == "m.login.password" and "stages" not in flow: passwordFlow = flow break if not passwordFlow: return json_res = yield self.http_client.do_request("POST", url, data=data) print json.dumps(json_res, indent=4) if update_config and "user_id" in json_res: self.config["user"] = json_res["user_id"] self.config["token"] = json_res["access_token"] save_config(self.config) def do_login(self, line): """Login as a specific user: "login @bob:localhost" You MAY be prompted for a password, or instructed to visit a URL. """ try: args = self._parse(line, ["user_id"], force_keys=True) can_login = threads.blockingCallFromThread( reactor, self._check_can_login) if can_login: p = getpass.getpass("Enter your password: ") user = args["user_id"] if self._is_on("complete_usernames") and not user.startswith("@"): domain = self._domain() if domain: user = "@" + user + ":" + domain reactor.callFromThread(self._do_login, user, p) #print " got %s " % p except Exception as e: print e @defer.inlineCallbacks def _do_login(self, user, password): path = "/login" data = { "user": user, "password": password, "type": "m.login.password" } url = self._url() + path json_res = yield self.http_client.do_request("POST", url, data=data) print json_res if "access_token" in json_res: self.config["user"] = user self.config["token"] = json_res["access_token"] save_config(self.config) print "Login successful." @defer.inlineCallbacks def _check_can_login(self): path = "/login" # ALWAYS check that the home server can handle the login request before # submitting! 
url = self._url() + path json_res = yield self.http_client.do_request("GET", url) print json_res if "flows" not in json_res: print "Failed to find any login flows." defer.returnValue(False) flow = json_res["flows"][0] # assume first is the one we want. if ("type" not in flow or "m.login.password" != flow["type"] or "stages" in flow): fallback_url = self._url() + "/login/fallback" print ("Unable to login via the command line client. Please visit " "%s to login." % fallback_url) defer.returnValue(False) defer.returnValue(True) def do_emailrequest(self, line): """Requests the association of a third party identifier
The email address) A string of characters generated when requesting an email that you'll supply in subsequent calls to identify yourself The number of times the user has requested an email. Leave this the same between requests to retry the request at the transport level. Increment it to request that the email be sent again. """ args = self._parse(line, ['address', 'clientSecret', 'sendAttempt']) postArgs = {'email': args['address'], 'clientSecret': args['clientSecret'], 'sendAttempt': args['sendAttempt']} reactor.callFromThread(self._do_emailrequest, postArgs) @defer.inlineCallbacks def _do_emailrequest(self, args): url = self._identityServerUrl()+"/_matrix/identity/api/v1/validate/email/requestToken" json_res = yield self.http_client.do_request("POST", url, data=urllib.urlencode(args), jsonreq=False, headers={'Content-Type': ['application/x-www-form-urlencoded']}) print json_res if 'sid' in json_res: print "Token sent. Your session ID is %s" % (json_res['sid']) def do_emailvalidate(self, line): """Validate and associate a third party ID The session ID (sid) given to you in the response to requestToken The token sent to your third party identifier address The same clientSecret you supplied in requestToken """ args = self._parse(line, ['sid', 'token', 'clientSecret']) postArgs = { 'sid' : args['sid'], 'token' : args['token'], 'clientSecret': args['clientSecret'] } reactor.callFromThread(self._do_emailvalidate, postArgs) @defer.inlineCallbacks def _do_emailvalidate(self, args): url = self._identityServerUrl()+"/_matrix/identity/api/v1/validate/email/submitToken" json_res = yield self.http_client.do_request("POST", url, data=urllib.urlencode(args), jsonreq=False, headers={'Content-Type': ['application/x-www-form-urlencoded']}) print json_res def do_3pidbind(self, line): """Validate and associate a third party ID The session ID (sid) given to you in the response to requestToken The same clientSecret you supplied in requestToken """ args = self._parse(line, ['sid', 'clientSecret']) postArgs = { 'sid' : args['sid'], 'clientSecret': args['clientSecret'] } postArgs['mxid'] = self.config["user"] reactor.callFromThread(self._do_3pidbind, postArgs) @defer.inlineCallbacks def _do_3pidbind(self, args): url = self._identityServerUrl()+"/_matrix/identity/api/v1/3pid/bind" json_res = yield self.http_client.do_request("POST", url, data=urllib.urlencode(args), jsonreq=False, headers={'Content-Type': ['application/x-www-form-urlencoded']}) print json_res def do_join(self, line): """Joins a room: "join " """ try: args = self._parse(line, ["roomid"], force_keys=True) self._do_membership_change(args["roomid"], "join", self._usr()) except Exception as e: print e def do_joinalias(self, line): try: args = self._parse(line, ["roomname"], force_keys=True) path = "/join/%s" % urllib.quote(args["roomname"]) reactor.callFromThread(self._run_and_pprint, "POST", path, {}) except Exception as e: print e def do_topic(self, line): """"topic [set|get] []" Set the topic for a room: topic set Get the topic for a room: topic get """ try: args = self._parse(line, ["action", "roomid", "topic"]) if "action" not in args or "roomid" not in args: print "Must specify set|get and a room ID." return if args["action"].lower() not in ["set", "get"]: print "Must specify set|get, not %s" % args["action"] return path = "/rooms/%s/topic" % urllib.quote(args["roomid"]) if args["action"].lower() == "set": if "topic" not in args: print "Must specify a new topic." 
return body = { "topic": args["topic"] } reactor.callFromThread(self._run_and_pprint, "PUT", path, body) elif args["action"].lower() == "get": reactor.callFromThread(self._run_and_pprint, "GET", path) except Exception as e: print e def do_invite(self, line): """Invite a user to a room: "invite " """ try: args = self._parse(line, ["userid", "roomid"], force_keys=True) user_id = args["userid"] reactor.callFromThread(self._do_invite, args["roomid"], user_id) except Exception as e: print e @defer.inlineCallbacks def _do_invite(self, roomid, userstring): if (not userstring.startswith('@') and self._is_on("complete_usernames")): url = self._identityServerUrl()+"/_matrix/identity/api/v1/lookup" json_res = yield self.http_client.do_request("GET", url, qparams={'medium':'email','address':userstring}) mxid = None if 'mxid' in json_res and 'signatures' in json_res: url = self._identityServerUrl()+"/_matrix/identity/api/v1/pubkey/ed25519" pubKey = None pubKeyObj = yield self.http_client.do_request("GET", url) if 'public_key' in pubKeyObj: pubKey = nacl.signing.VerifyKey(pubKeyObj['public_key'], encoder=nacl.encoding.HexEncoder) else: print "No public key found in pubkey response!" sigValid = False if pubKey: for signame in json_res['signatures']: if signame not in TRUSTED_ID_SERVERS: print "Ignoring signature from untrusted server %s" % (signame) else: try: verify_signed_json(json_res, signame, pubKey) sigValid = True print "Mapping %s -> %s correctly signed by %s" % (userstring, json_res['mxid'], signame) break except SignatureVerifyException as e: print "Invalid signature from %s" % (signame) print e if sigValid: print "Resolved 3pid %s to %s" % (userstring, json_res['mxid']) mxid = json_res['mxid'] else: print "Got association for %s but couldn't verify signature" % (userstring) if not mxid: mxid = "@" + userstring + ":" + self._domain() self._do_membership_change(roomid, "invite", mxid) def do_leave(self, line): """Leaves a room: "leave " """ try: args = self._parse(line, ["roomid"], force_keys=True) self._do_membership_change(args["roomid"], "leave", self._usr()) except Exception as e: print e def do_send(self, line): """Sends a message. "send " """ args = self._parse(line, ["roomid", "body"]) txn_id = "txn%s" % int(time.time()) path = "/rooms/%s/send/m.room.message/%s" % (urllib.quote(args["roomid"]), txn_id) body_json = { "msgtype": "m.text", "body": args["body"] } reactor.callFromThread(self._run_and_pprint, "PUT", path, body_json) def do_list(self, line): """List data about a room. "list members [query]" - List all the members in this room. "list messages [query]" - List all the messages in this room. Where [query] will be directly applied as query parameters, allowing you to use the pagination API. E.g. the last 3 messages in this room: "list messages from=END&to=START&limit=3" """ args = self._parse(line, ["type", "roomid", "qp"]) if not "type" in args or not "roomid" in args: print "Must specify type and room ID." return if args["type"] not in ["members", "messages"]: print "Unrecognised type: %s" % args["type"] return room_id = args["roomid"] path = "/rooms/%s/%s" % (urllib.quote(room_id), args["type"]) qp = {"access_token": self._tok()} if "qp" in args: for key_value_str in args["qp"].split("&"): try: key_value = key_value_str.split("=") qp[key_value[0]] = key_value[1] except: print "Bad query param: %s" % key_value return reactor.callFromThread(self._run_and_pprint, "GET", path, query_params=qp) def do_create(self, line): """Creates a room. 
"create [public|private] " - Create a room with the specified visibility. "create " - Create a room with default visibility. "create [public|private]" - Create a room with specified visibility. "create" - Create a room with default visibility. """ args = self._parse(line, ["vis", "roomname"]) # fixup args depending on which were set body = {} if "vis" in args and args["vis"] in ["public", "private"]: body["visibility"] = args["vis"] if "roomname" in args: room_name = args["roomname"] body["room_alias_name"] = room_name elif "vis" in args and args["vis"] not in ["public", "private"]: room_name = args["vis"] body["room_alias_name"] = room_name reactor.callFromThread(self._run_and_pprint, "POST", "/createRoom", body) def do_raw(self, line): """Directly send a JSON object: "raw " : Required. One of "PUT", "GET", "POST", "xPUT", "xGET", "xPOST". Methods with 'x' prefixed will not automatically append the access token. : Required. E.g. "/events" : Optional. E.g. "{ "msgtype":"custom.text", "body":"abc123"}" """ args = self._parse(line, ["method", "path", "data"]) # sanity check if "method" not in args or "path" not in args: print "Must specify path and method." return args["method"] = args["method"].upper() valid_methods = ["PUT", "GET", "POST", "DELETE", "XPUT", "XGET", "XPOST", "XDELETE"] if args["method"] not in valid_methods: print "Unsupported method: %s" % args["method"] return if "data" not in args: args["data"] = None else: try: args["data"] = json.loads(args["data"]) except Exception as e: print "Data is not valid JSON. %s" % e return qp = {"access_token": self._tok()} if args["method"].startswith("X"): qp = {} # remove access token args["method"] = args["method"][1:] # snip the X else: # append any query params the user has set try: parsed_url = urlparse.urlparse(args["path"]) qp.update(urlparse.parse_qs(parsed_url.query)) args["path"] = parsed_url.path except: pass reactor.callFromThread(self._run_and_pprint, args["method"], args["path"], args["data"], query_params=qp) def do_stream(self, line): """Stream data from the server: "stream " """ args = self._parse(line, ["timeout"]) timeout = 5000 if "timeout" in args: try: timeout = int(args["timeout"]) except ValueError: print "Timeout must be in milliseconds." 
return reactor.callFromThread(self._do_event_stream, timeout) @defer.inlineCallbacks def _do_event_stream(self, timeout): res = yield self.http_client.get_json( self._url() + "/events", { "access_token": self._tok(), "timeout": str(timeout), "from": self.event_stream_token }) print json.dumps(res, indent=4) if "chunk" in res: for event in res["chunk"]: if (event["type"] == "m.room.message" and self._is_on("send_delivery_receipts") and event["user_id"] != self._usr()): # not sent by us self._send_receipt(event, "d") # update the position in the stram if "end" in res: self.event_stream_token = res["end"] def _send_receipt(self, event, feedback_type): path = ("/rooms/%s/messages/%s/%s/feedback/%s/%s" % (urllib.quote(event["room_id"]), event["user_id"], event["msg_id"], self._usr(), feedback_type)) data = {} reactor.callFromThread(self._run_and_pprint, "PUT", path, data=data, alt_text="Sent receipt for %s" % event["msg_id"]) def _do_membership_change(self, roomid, membership, userid): path = "/rooms/%s/state/m.room.member/%s" % (urllib.quote(roomid), urllib.quote(userid)) data = { "membership": membership } reactor.callFromThread(self._run_and_pprint, "PUT", path, data=data) def do_displayname(self, line): """Get or set my displayname: "displayname [new_name]" """ args = self._parse(line, ["name"]) path = "/profile/%s/displayname" % (self.config["user"]) if "name" in args: data = {"displayname": args["name"]} reactor.callFromThread(self._run_and_pprint, "PUT", path, data=data) else: reactor.callFromThread(self._run_and_pprint, "GET", path) def _do_presence_state(self, state, line): args = self._parse(line, ["msgstring"]) path = "/presence/%s/status" % (self.config["user"]) data = {"state": state} if "msgstring" in args: data["status_msg"] = args["msgstring"] reactor.callFromThread(self._run_and_pprint, "PUT", path, data=data) def do_offline(self, line): """Set my presence state to OFFLINE""" self._do_presence_state(0, line) def do_away(self, line): """Set my presence state to AWAY""" self._do_presence_state(1, line) def do_online(self, line): """Set my presence state to ONLINE""" self._do_presence_state(2, line) def _parse(self, line, keys, force_keys=False): """ Parses the given line. Args: line : The line to parse keys : A list of keys to map onto the args force_keys : True to enforce that the line has a value for every key Returns: A dict of key:arg """ line_args = shlex.split(line) if force_keys and len(line_args) != len(keys): raise IndexError("Must specify all args: %s" % keys) # do $ substitutions for i, arg in enumerate(line_args): for config_key in self.config: if ("$" + config_key) in arg: arg = arg.replace("$" + config_key, self.config[config_key]) line_args[i] = arg return dict(zip(keys, line_args)) @defer.inlineCallbacks def _run_and_pprint(self, method, path, data=None, query_params={"access_token": None}, alt_text=None): """ Runs an HTTP request and pretty prints the output. 
Args: method: HTTP method path: Relative path data: Raw JSON data if any query_params: dict of query parameters to add to the url """ url = self._url() + path if "access_token" in query_params: query_params["access_token"] = self._tok() json_res = yield self.http_client.do_request(method, url, data=data, qparams=query_params) if alt_text: print alt_text else: print json.dumps(json_res, indent=4) def save_config(config): with open(CONFIG_JSON, 'w') as out: json.dump(config, out) def main(server_url, identity_server_url, username, token, config_path): print "Synapse command line client" print "===========================" print "Server: %s" % server_url print "Type 'help' to get started." print "Close this console with CTRL+C then CTRL+D." if not username or not token: print "- 'register ' - Register an account" print "- 'stream' - Connect to the event stream" print "- 'create ' - Create a room" print "- 'send ' - Send a message" http_client = TwistedHttpClient() # the command line client syn_cmd = SynapseCmd(http_client, server_url, identity_server_url, username, token) # load synapse.json config from a previous session global CONFIG_JSON CONFIG_JSON = config_path # bit cheeky, but just overwrite the global try: with open(config_path, 'r') as config: syn_cmd.config = json.load(config) try: http_client.verbose = "on" == syn_cmd.config["verbose"] except: pass print "Loaded config from %s" % config_path except: pass # Twisted-specific: Runs the command processor in Twisted's event loop # to maintain a single thread for both commands and event processing. # If using another HTTP client, just call syn_cmd.cmdloop() reactor.callInThread(syn_cmd.cmdloop) reactor.run() if __name__ == '__main__': parser = argparse.ArgumentParser("Starts a synapse client.") parser.add_argument( "-s", "--server", dest="server", default="http://localhost:8008", help="The URL of the home server to talk to.") parser.add_argument( "-i", "--identity-server", dest="identityserver", default="http://localhost:8090", help="The URL of the identity server to talk to.") parser.add_argument( "-u", "--username", dest="username", help="Your username on the server.") parser.add_argument( "-t", "--token", dest="token", help="Your access token.") parser.add_argument( "-c", "--config", dest="config", default=CONFIG_JSON, help="The location of the config.json file to read from.") args = parser.parse_args() if not args.server: print "You must supply a server URL to communicate with." parser.print_help() sys.exit(1) server = args.server if not server.startswith("http://"): server = "http://" + args.server main(server, args.identityserver, args.username, args.token, args.config) synapse-0.24.0/contrib/cmdclient/http.py000066400000000000000000000147321317335640100202240ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
from twisted.web.client import Agent, readBody from twisted.web.http_headers import Headers from twisted.internet import defer, reactor from pprint import pformat import json import urllib class HttpClient(object): """ Interface for talking json over http """ def put_json(self, url, data): """ Sends the specifed json data using PUT Args: url (str): The URL to PUT data to. data (dict): A dict containing the data that will be used as the request body. This will be encoded as JSON. Returns: Deferred: Succeeds when we get a 2xx HTTP response. The result will be the decoded JSON body. """ pass def get_json(self, url, args=None): """ Gets some json from the given host homeserver and path Args: url (str): The URL to GET data from. args (dict): A dictionary used to create query strings, defaults to None. **Note**: The value of each key is assumed to be an iterable and *not* a string. Returns: Deferred: Succeeds when we get a 2xx HTTP response. The result will be the decoded JSON body. """ pass class TwistedHttpClient(HttpClient): """ Wrapper around the twisted HTTP client api. Attributes: agent (twisted.web.client.Agent): The twisted Agent used to send the requests. """ def __init__(self): self.agent = Agent(reactor) @defer.inlineCallbacks def put_json(self, url, data): response = yield self._create_put_request( url, data, headers_dict={"Content-Type": ["application/json"]} ) body = yield readBody(response) defer.returnValue((response.code, body)) @defer.inlineCallbacks def get_json(self, url, args=None): if args: # generates a list of strings of form "k=v". qs = urllib.urlencode(args, True) url = "%s?%s" % (url, qs) response = yield self._create_get_request(url) body = yield readBody(response) defer.returnValue(json.loads(body)) def _create_put_request(self, url, json_data, headers_dict={}): """ Wrapper of _create_request to issue a PUT request """ if "Content-Type" not in headers_dict: raise defer.error( RuntimeError("Must include Content-Type header for PUTs")) return self._create_request( "PUT", url, producer=_JsonProducer(json_data), headers_dict=headers_dict ) def _create_get_request(self, url, headers_dict={}): """ Wrapper of _create_request to issue a GET request """ return self._create_request( "GET", url, headers_dict=headers_dict ) @defer.inlineCallbacks def do_request(self, method, url, data=None, qparams=None, jsonreq=True, headers={}): if qparams: url = "%s?%s" % (url, urllib.urlencode(qparams, True)) if jsonreq: prod = _JsonProducer(data) headers['Content-Type'] = ["application/json"]; else: prod = _RawProducer(data) if method in ["POST", "PUT"]: response = yield self._create_request(method, url, producer=prod, headers_dict=headers) else: response = yield self._create_request(method, url) body = yield readBody(response) defer.returnValue(json.loads(body)) @defer.inlineCallbacks def _create_request(self, method, url, producer=None, headers_dict={}): """ Creates and sends a request to the given url """ headers_dict["User-Agent"] = ["Synapse Cmd Client"] retries_left = 5 print "%s to %s with headers %s" % (method, url, headers_dict) if self.verbose and producer: if "password" in producer.data: temp = producer.data["password"] producer.data["password"] = "[REDACTED]" print json.dumps(producer.data, indent=4) producer.data["password"] = temp else: print json.dumps(producer.data, indent=4) while True: try: response = yield self.agent.request( method, url.encode("UTF8"), Headers(headers_dict), producer ) break except Exception as e: print "uh oh: %s" % e if retries_left: yield self.sleep(2 
** (5 - retries_left)) retries_left -= 1 else: raise e if self.verbose: print "Status %s %s" % (response.code, response.phrase) print pformat(list(response.headers.getAllRawHeaders())) defer.returnValue(response) def sleep(self, seconds): d = defer.Deferred() reactor.callLater(seconds, d.callback, seconds) return d class _RawProducer(object): def __init__(self, data): self.data = data self.body = data self.length = len(self.body) def startProducing(self, consumer): consumer.write(self.body) return defer.succeed(None) def pauseProducing(self): pass def stopProducing(self): pass class _JsonProducer(object): """ Used by the twisted http client to create the HTTP body from json """ def __init__(self, jsn): self.data = jsn self.body = json.dumps(jsn).encode("utf8") self.length = len(self.body) def startProducing(self, consumer): consumer.write(self.body) return defer.succeed(None) def pauseProducing(self): pass def stopProducing(self): pass synapse-0.24.0/contrib/example_log_config.yaml000066400000000000000000000023571317335640100214360ustar00rootroot00000000000000# Example log_config file for synapse. To enable, point `log_config` to it in # `homeserver.yaml`, and restart synapse. # # This configuration will produce similar results to the defaults within # synapse, but can be edited to give more flexibility. version: 1 formatters: fmt: format: '%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s- %(message)s' filters: context: (): synapse.util.logcontext.LoggingContextFilter request: "" handlers: # example output to console console: class: logging.StreamHandler filters: [context] # example output to file - to enable, edit 'root' config below. file: class: logging.handlers.RotatingFileHandler formatter: fmt filename: /var/log/synapse/homeserver.log maxBytes: 100000000 backupCount: 3 filters: [context] root: level: INFO handlers: [console] # to use file handler instead, switch to [file] loggers: synapse: level: INFO synapse.storage.SQL: # beware: increasing this to DEBUG will make synapse log sensitive # information such as access tokens. level: INFO # example of enabling debugging for a component: # # synapse.federation.transport.server: # level: DEBUG synapse-0.24.0/contrib/experiments/000077500000000000000000000000001317335640100172655ustar00rootroot00000000000000synapse-0.24.0/contrib/experiments/cursesio.py000066400000000000000000000103531317335640100214750ustar00rootroot00000000000000# Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
import curses import curses.wrapper from curses.ascii import isprint from twisted.internet import reactor class CursesStdIO(): def __init__(self, stdscr, callback=None): self.statusText = "Synapse test app -" self.searchText = '' self.stdscr = stdscr self.logLine = '' self.callback = callback self._setup() def _setup(self): self.stdscr.nodelay(1) # Make non blocking self.rows, self.cols = self.stdscr.getmaxyx() self.lines = [] curses.use_default_colors() self.paintStatus(self.statusText) self.stdscr.refresh() def set_callback(self, callback): self.callback = callback def fileno(self): """ We want to select on FD 0 """ return 0 def connectionLost(self, reason): self.close() def print_line(self, text): """ add a line to the internal list of lines""" self.lines.append(text) self.redraw() def print_log(self, text): self.logLine = text self.redraw() def redraw(self): """ method for redisplaying lines based on internal list of lines """ self.stdscr.clear() self.paintStatus(self.statusText) i = 0 index = len(self.lines) - 1 while i < (self.rows - 3) and index >= 0: self.stdscr.addstr(self.rows - 3 - i, 0, self.lines[index], curses.A_NORMAL) i = i + 1 index = index - 1 self.printLogLine(self.logLine) self.stdscr.refresh() def paintStatus(self, text): if len(text) > self.cols: raise RuntimeError("TextTooLongError") self.stdscr.addstr( self.rows - 2, 0, text + ' ' * (self.cols - len(text)), curses.A_STANDOUT) def printLogLine(self, text): self.stdscr.addstr( 0, 0, text + ' ' * (self.cols - len(text)), curses.A_STANDOUT) def doRead(self): """ Input is ready! """ curses.noecho() c = self.stdscr.getch() # read a character if c == curses.KEY_BACKSPACE: self.searchText = self.searchText[:-1] elif c == curses.KEY_ENTER or c == 10: text = self.searchText self.searchText = '' self.print_line(">> %s" % text) try: if self.callback: self.callback.on_line(text) except Exception as e: self.print_line(str(e)) self.stdscr.refresh() elif isprint(c): if len(self.searchText) == self.cols - 2: return self.searchText = self.searchText + chr(c) self.stdscr.addstr(self.rows - 1, 0, self.searchText + (' ' * ( self.cols - len(self.searchText) - 2))) self.paintStatus(self.statusText + ' %d' % len(self.searchText)) self.stdscr.move(self.rows - 1, len(self.searchText)) self.stdscr.refresh() def logPrefix(self): return "CursesStdIO" def close(self): """ clean up """ curses.nocbreak() self.stdscr.keypad(0) curses.echo() curses.endwin() class Callback(object): def __init__(self, stdio): self.stdio = stdio def on_line(self, text): self.stdio.print_line(text) def main(stdscr): screen = CursesStdIO(stdscr) # create Screen object callback = Callback(screen) screen.set_callback(callback) stdscr.refresh() reactor.addReader(screen) reactor.run() screen.close() if __name__ == '__main__': curses.wrapper(main) synapse-0.24.0/contrib/experiments/test_messaging.py000066400000000000000000000270051317335640100226570ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
""" This is an example of using the server to server implementation to do a basic chat style thing. It accepts commands from stdin and outputs to stdout. It assumes that ucids are of the form @, and uses as the address of the remote home server to hit. Usage: python test_messaging.py Currently assumes the local address is localhost: """ from synapse.federation import ( ReplicationHandler ) from synapse.federation.units import Pdu from synapse.util import origin_from_ucid from synapse.app.homeserver import SynapseHomeServer #from synapse.util.logutils import log_function from twisted.internet import reactor, defer from twisted.python import log import argparse import json import logging import os import re import cursesio import curses.wrapper logger = logging.getLogger("example") def excpetion_errback(failure): logging.exception(failure) class InputOutput(object): """ This is responsible for basic I/O so that a user can interact with the example app. """ def __init__(self, screen, user): self.screen = screen self.user = user def set_home_server(self, server): self.server = server def on_line(self, line): """ This is where we process commands. """ try: m = re.match("^join (\S+)$", line) if m: # The `sender` wants to join a room. room_name, = m.groups() self.print_line("%s joining %s" % (self.user, room_name)) self.server.join_room(room_name, self.user, self.user) #self.print_line("OK.") return m = re.match("^invite (\S+) (\S+)$", line) if m: # `sender` wants to invite someone to a room room_name, invitee = m.groups() self.print_line("%s invited to %s" % (invitee, room_name)) self.server.invite_to_room(room_name, self.user, invitee) #self.print_line("OK.") return m = re.match("^send (\S+) (.*)$", line) if m: # `sender` wants to message a room room_name, body = m.groups() self.print_line("%s send to %s" % (self.user, room_name)) self.server.send_message(room_name, self.user, body) #self.print_line("OK.") return m = re.match("^backfill (\S+)$", line) if m: # we want to backfill a room room_name, = m.groups() self.print_line("backfill %s" % room_name) self.server.backfill(room_name) return self.print_line("Unrecognized command") except Exception as e: logger.exception(e) def print_line(self, text): self.screen.print_line(text) def print_log(self, text): self.screen.print_log(text) class IOLoggerHandler(logging.Handler): def __init__(self, io): logging.Handler.__init__(self) self.io = io def emit(self, record): if record.levelno < logging.WARN: return msg = self.format(record) self.io.print_log(msg) class Room(object): """ Used to store (in memory) the current membership state of a room, and which home servers we should send PDUs associated with the room to. """ def __init__(self, room_name): self.room_name = room_name self.invited = set() self.participants = set() self.servers = set() self.oldest_server = None self.have_got_metadata = False def add_participant(self, participant): """ Someone has joined the room """ self.participants.add(participant) self.invited.discard(participant) server = origin_from_ucid(participant) self.servers.add(server) if not self.oldest_server: self.oldest_server = server def add_invited(self, invitee): """ Someone has been invited to the room """ self.invited.add(invitee) self.servers.add(origin_from_ucid(invitee)) class HomeServer(ReplicationHandler): """ A very basic home server implentation that allows people to join a room and then invite other people. 
""" def __init__(self, server_name, replication_layer, output): self.server_name = server_name self.replication_layer = replication_layer self.replication_layer.set_handler(self) self.joined_rooms = {} self.output = output def on_receive_pdu(self, pdu): """ We just received a PDU """ pdu_type = pdu.pdu_type if pdu_type == "sy.room.message": self._on_message(pdu) elif pdu_type == "sy.room.member" and "membership" in pdu.content: if pdu.content["membership"] == "join": self._on_join(pdu.context, pdu.state_key) elif pdu.content["membership"] == "invite": self._on_invite(pdu.origin, pdu.context, pdu.state_key) else: self.output.print_line("#%s (unrec) %s = %s" % (pdu.context, pdu.pdu_type, json.dumps(pdu.content)) ) #def on_state_change(self, pdu): ##self.output.print_line("#%s (state) %s *** %s" % ##(pdu.context, pdu.state_key, pdu.pdu_type) ##) #if "joinee" in pdu.content: #self._on_join(pdu.context, pdu.content["joinee"]) #elif "invitee" in pdu.content: #self._on_invite(pdu.origin, pdu.context, pdu.content["invitee"]) def _on_message(self, pdu): """ We received a message """ self.output.print_line("#%s %s %s" % (pdu.context, pdu.content["sender"], pdu.content["body"]) ) def _on_join(self, context, joinee): """ Someone has joined a room, either a remote user or a local user """ room = self._get_or_create_room(context) room.add_participant(joinee) self.output.print_line("#%s %s %s" % (context, joinee, "*** JOINED") ) def _on_invite(self, origin, context, invitee): """ Someone has been invited """ room = self._get_or_create_room(context) room.add_invited(invitee) self.output.print_line("#%s %s %s" % (context, invitee, "*** INVITED") ) if not room.have_got_metadata and origin is not self.server_name: logger.debug("Get room state") self.replication_layer.get_state_for_context(origin, context) room.have_got_metadata = True @defer.inlineCallbacks def send_message(self, room_name, sender, body): """ Send a message to a room! """ destinations = yield self.get_servers_for_context(room_name) try: yield self.replication_layer.send_pdu( Pdu.create_new( context=room_name, pdu_type="sy.room.message", content={"sender": sender, "body": body}, origin=self.server_name, destinations=destinations, ) ) except Exception as e: logger.exception(e) @defer.inlineCallbacks def join_room(self, room_name, sender, joinee): """ Join a room! """ self._on_join(room_name, joinee) destinations = yield self.get_servers_for_context(room_name) try: pdu = Pdu.create_new( context=room_name, pdu_type="sy.room.member", is_state=True, state_key=joinee, content={"membership": "join"}, origin=self.server_name, destinations=destinations, ) yield self.replication_layer.send_pdu(pdu) except Exception as e: logger.exception(e) @defer.inlineCallbacks def invite_to_room(self, room_name, sender, invitee): """ Invite someone to a room! 
""" self._on_invite(self.server_name, room_name, invitee) destinations = yield self.get_servers_for_context(room_name) try: yield self.replication_layer.send_pdu( Pdu.create_new( context=room_name, is_state=True, pdu_type="sy.room.member", state_key=invitee, content={"membership": "invite"}, origin=self.server_name, destinations=destinations, ) ) except Exception as e: logger.exception(e) def backfill(self, room_name, limit=5): room = self.joined_rooms.get(room_name) if not room: return dest = room.oldest_server return self.replication_layer.backfill(dest, room_name, limit) def _get_room_remote_servers(self, room_name): return [i for i in self.joined_rooms.setdefault(room_name,).servers] def _get_or_create_room(self, room_name): return self.joined_rooms.setdefault(room_name, Room(room_name)) def get_servers_for_context(self, context): return defer.succeed( self.joined_rooms.setdefault(context, Room(context)).servers ) def main(stdscr): parser = argparse.ArgumentParser() parser.add_argument('user', type=str) parser.add_argument('-v', '--verbose', action='count') args = parser.parse_args() user = args.user server_name = origin_from_ucid(user) ## Set up logging ## root_logger = logging.getLogger() formatter = logging.Formatter('%(asctime)s - %(name)s - %(lineno)d - ' '%(levelname)s - %(message)s') if not os.path.exists("logs"): os.makedirs("logs") fh = logging.FileHandler("logs/%s" % user) fh.setFormatter(formatter) root_logger.addHandler(fh) root_logger.setLevel(logging.DEBUG) # Hack: The only way to get it to stop logging to sys.stderr :( log.theLogPublisher.observers = [] observer = log.PythonLoggingObserver() observer.start() ## Set up synapse server curses_stdio = cursesio.CursesStdIO(stdscr) input_output = InputOutput(curses_stdio, user) curses_stdio.set_callback(input_output) app_hs = SynapseHomeServer(server_name, db_name="dbs/%s" % user) replication = app_hs.get_replication_layer() hs = HomeServer(server_name, replication, curses_stdio) input_output.set_home_server(hs) ## Add input_output logger io_logger = IOLoggerHandler(input_output) io_logger.setFormatter(formatter) root_logger.addHandler(io_logger) ## Start! ## try: port = int(server_name.split(":")[1]) except: port = 12345 app_hs.get_http_server().start_listening(port) reactor.addReader(curses_stdio) reactor.run() if __name__ == "__main__": curses.wrapper(main) synapse-0.24.0/contrib/graph/000077500000000000000000000000001317335640100160235ustar00rootroot00000000000000synapse-0.24.0/contrib/graph/graph.py000066400000000000000000000104221317335640100174750ustar00rootroot00000000000000# Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
import sqlite3 import pydot import cgi import json import datetime import argparse import urllib2 def make_name(pdu_id, origin): return "%s@%s" % (pdu_id, origin) def make_graph(pdus, room, filename_prefix): pdu_map = {} node_map = {} origins = set() colors = set(("red", "green", "blue", "yellow", "purple")) for pdu in pdus: origins.add(pdu.get("origin")) color_map = {color: color for color in colors if color in origins} colors -= set(color_map.values()) color_map[None] = "black" for o in origins: if o in color_map: continue try: c = colors.pop() color_map[o] = c except: print "Run out of colours!" color_map[o] = "black" graph = pydot.Dot(graph_name="Test") for pdu in pdus: name = make_name(pdu.get("pdu_id"), pdu.get("origin")) pdu_map[name] = pdu t = datetime.datetime.fromtimestamp( float(pdu["ts"]) / 1000 ).strftime('%Y-%m-%d %H:%M:%S,%f') label = ( "<" "%(name)s
" "Type: %(type)s
" "State key: %(state_key)s
" "Content: %(content)s
" "Time: %(time)s
" "Depth: %(depth)s
" ">" ) % { "name": name, "type": pdu.get("pdu_type"), "state_key": pdu.get("state_key"), "content": cgi.escape(json.dumps(pdu.get("content")), quote=True), "time": t, "depth": pdu.get("depth"), } node = pydot.Node( name=name, label=label, color=color_map[pdu.get("origin")] ) node_map[name] = node graph.add_node(node) for pdu in pdus: start_name = make_name(pdu.get("pdu_id"), pdu.get("origin")) for i, o in pdu.get("prev_pdus", []): end_name = make_name(i, o) if end_name not in node_map: print "%s not in nodes" % end_name continue edge = pydot.Edge(node_map[start_name], node_map[end_name]) graph.add_edge(edge) # Add prev_state edges, if they exist if pdu.get("prev_state_id") and pdu.get("prev_state_origin"): prev_state_name = make_name( pdu.get("prev_state_id"), pdu.get("prev_state_origin") ) if prev_state_name in node_map: state_edge = pydot.Edge( node_map[start_name], node_map[prev_state_name], style='dotted' ) graph.add_edge(state_edge) graph.write('%s.dot' % filename_prefix, format='raw', prog='dot') # graph.write_png("%s.png" % filename_prefix, prog='dot') graph.write_svg("%s.svg" % filename_prefix, prog='dot') def get_pdus(host, room): transaction = json.loads( urllib2.urlopen( "http://%s/_matrix/federation/v1/context/%s/" % (host, room) ).read() ) return transaction["pdus"] if __name__ == "__main__": parser = argparse.ArgumentParser( description="Generate a PDU graph for a given room by talking " "to the given homeserver to get the list of PDUs. \n" "Requires pydot." ) parser.add_argument( "-p", "--prefix", dest="prefix", help="String to prefix output files with" ) parser.add_argument('host') parser.add_argument('room') args = parser.parse_args() host = args.host room = args.room prefix = args.prefix if args.prefix else "%s_graph" % (room) pdus = get_pdus(host, room) make_graph(pdus, room, prefix) synapse-0.24.0/contrib/graph/graph2.py000066400000000000000000000105371317335640100175660ustar00rootroot00000000000000# Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import sqlite3 import pydot import cgi import json import datetime import argparse from synapse.events import FrozenEvent from synapse.util.frozenutils import unfreeze def make_graph(db_name, room_id, file_prefix, limit): conn = sqlite3.connect(db_name) sql = ( "SELECT json FROM event_json as j " "INNER JOIN events as e ON e.event_id = j.event_id " "WHERE j.room_id = ?" ) args = [room_id] if limit: sql += ( " ORDER BY topological_ordering DESC, stream_ordering DESC " "LIMIT ?" 
) args.append(limit) c = conn.execute(sql, args) events = [FrozenEvent(json.loads(e[0])) for e in c.fetchall()] events.sort(key=lambda e: e.depth) node_map = {} state_groups = {} graph = pydot.Dot(graph_name="Test") for event in events: c = conn.execute( "SELECT state_group FROM event_to_state_groups " "WHERE event_id = ?", (event.event_id,) ) res = c.fetchone() state_group = res[0] if res else None if state_group is not None: state_groups.setdefault(state_group, []).append(event.event_id) t = datetime.datetime.fromtimestamp( float(event.origin_server_ts) / 1000 ).strftime('%Y-%m-%d %H:%M:%S,%f') content = json.dumps(unfreeze(event.get_dict()["content"])) label = ( "<" "%(name)s
" "Type: %(type)s
" "State key: %(state_key)s
" "Content: %(content)s
" "Time: %(time)s
" "Depth: %(depth)s
" "State group: %(state_group)s
" ">" ) % { "name": event.event_id, "type": event.type, "state_key": event.get("state_key", None), "content": cgi.escape(content, quote=True), "time": t, "depth": event.depth, "state_group": state_group, } node = pydot.Node( name=event.event_id, label=label, ) node_map[event.event_id] = node graph.add_node(node) for event in events: for prev_id, _ in event.prev_events: try: end_node = node_map[prev_id] except: end_node = pydot.Node( name=prev_id, label="<%s>" % (prev_id,), ) node_map[prev_id] = end_node graph.add_node(end_node) edge = pydot.Edge(node_map[event.event_id], end_node) graph.add_edge(edge) for group, event_ids in state_groups.items(): if len(event_ids) <= 1: continue cluster = pydot.Cluster( str(group), label="" % (str(group),) ) for event_id in event_ids: cluster.add_node(node_map[event_id]) graph.add_subgraph(cluster) graph.write('%s.dot' % file_prefix, format='raw', prog='dot') graph.write_svg("%s.svg" % file_prefix, prog='dot') if __name__ == "__main__": parser = argparse.ArgumentParser( description="Generate a PDU graph for a given room by talking " "to the given homeserver to get the list of PDUs. \n" "Requires pydot." ) parser.add_argument( "-p", "--prefix", dest="prefix", help="String to prefix output files with", default="graph_output" ) parser.add_argument( "-l", "--limit", help="Only retrieve the last N events.", ) parser.add_argument('db') parser.add_argument('room') args = parser.parse_args() make_graph(args.db, args.room, args.prefix, args.limit) synapse-0.24.0/contrib/graph/graph3.py000066400000000000000000000102171317335640100175620ustar00rootroot00000000000000# Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import pydot import cgi import simplejson as json import datetime import argparse from synapse.events import FrozenEvent from synapse.util.frozenutils import unfreeze def make_graph(file_name, room_id, file_prefix, limit): print "Reading lines" with open(file_name) as f: lines = f.readlines() print "Read lines" events = [FrozenEvent(json.loads(line)) for line in lines] print "Loaded events." events.sort(key=lambda e: e.depth) print "Sorted events" if limit: events = events[-int(limit):] node_map = {} graph = pydot.Dot(graph_name="Test") for event in events: t = datetime.datetime.fromtimestamp( float(event.origin_server_ts) / 1000 ).strftime('%Y-%m-%d %H:%M:%S,%f') content = json.dumps(unfreeze(event.get_dict()["content"]), indent=4) content = content.replace("\n", "
\n") print content content = [] for key, value in unfreeze(event.get_dict()["content"]).items(): if value is None: value = "" elif isinstance(value, basestring): pass else: value = json.dumps(value) content.append( "%s: %s," % ( cgi.escape(key, quote=True).encode("ascii", 'xmlcharrefreplace'), cgi.escape(value, quote=True).encode("ascii", 'xmlcharrefreplace'), ) ) content = "
\n".join(content) print content label = ( "<" "%(name)s
" "Type: %(type)s
" "State key: %(state_key)s
" "Content: %(content)s
" "Time: %(time)s
" "Depth: %(depth)s
" ">" ) % { "name": event.event_id, "type": event.type, "state_key": event.get("state_key", None), "content": content, "time": t, "depth": event.depth, } node = pydot.Node( name=event.event_id, label=label, ) node_map[event.event_id] = node graph.add_node(node) print "Created Nodes" for event in events: for prev_id, _ in event.prev_events: try: end_node = node_map[prev_id] except: end_node = pydot.Node( name=prev_id, label="<%s>" % (prev_id,), ) node_map[prev_id] = end_node graph.add_node(end_node) edge = pydot.Edge(node_map[event.event_id], end_node) graph.add_edge(edge) print "Created edges" graph.write('%s.dot' % file_prefix, format='raw', prog='dot') print "Created Dot" graph.write_svg("%s.svg" % file_prefix, prog='dot') print "Created svg" if __name__ == "__main__": parser = argparse.ArgumentParser( description="Generate a PDU graph for a given room by reading " "from a file with line deliminated events. \n" "Requires pydot." ) parser.add_argument( "-p", "--prefix", dest="prefix", help="String to prefix output files with", default="graph_output" ) parser.add_argument( "-l", "--limit", help="Only retrieve the last N events.", ) parser.add_argument('event_file') parser.add_argument('room') args = parser.parse_args() make_graph(args.event_file, args.room, args.prefix, args.limit) synapse-0.24.0/contrib/jitsimeetbridge/000077500000000000000000000000001317335640100200745ustar00rootroot00000000000000synapse-0.24.0/contrib/jitsimeetbridge/jitsimeetbridge.py000066400000000000000000000243751317335640100236330ustar00rootroot00000000000000#!/usr/bin/env python """ This is an attempt at bridging matrix clients into a Jitis meet room via Matrix video call. It uses hard-coded xml strings overg XMPP BOSH. It can display one of the streams from the Jitsi bridge until the second lot of SDP comes down and we set the remote SDP at which point the stream ends. Our video never gets to the bridge. 
Requires: npm install jquery jsdom """ import gevent import grequests from BeautifulSoup import BeautifulSoup import json import urllib import subprocess import time #ACCESS_TOKEN="" # MATRIXBASE = 'https://matrix.org/_matrix/client/api/v1/' MYUSERNAME = '@davetest:matrix.org' HTTPBIND = 'https://meet.jit.si/http-bind' #HTTPBIND = 'https://jitsi.vuc.me/http-bind' #ROOMNAME = "matrix" ROOMNAME = "pibble" HOST="guest.jit.si" #HOST="jitsi.vuc.me" TURNSERVER="turn.guest.jit.si" #TURNSERVER="turn.jitsi.vuc.me" ROOMDOMAIN="meet.jit.si" #ROOMDOMAIN="conference.jitsi.vuc.me" class TrivialMatrixClient: def __init__(self, access_token): self.token = None self.access_token = access_token def getEvent(self): while True: url = MATRIXBASE+'events?access_token='+self.access_token+"&timeout=60000" if self.token: url += "&from="+self.token req = grequests.get(url) resps = grequests.map([req]) obj = json.loads(resps[0].content) print "incoming from matrix",obj if 'end' not in obj: continue self.token = obj['end'] if len(obj['chunk']): return obj['chunk'][0] def joinRoom(self, roomId): url = MATRIXBASE+'rooms/'+roomId+'/join?access_token='+self.access_token print url headers={ 'Content-Type': 'application/json' } req = grequests.post(url, headers=headers, data='{}') resps = grequests.map([req]) obj = json.loads(resps[0].content) print "response: ",obj def sendEvent(self, roomId, evType, event): url = MATRIXBASE+'rooms/'+roomId+'/send/'+evType+'?access_token='+self.access_token print url print json.dumps(event) headers={ 'Content-Type': 'application/json' } req = grequests.post(url, headers=headers, data=json.dumps(event)) resps = grequests.map([req]) obj = json.loads(resps[0].content) print "response: ",obj xmppClients = {} def matrixLoop(): while True: ev = matrixCli.getEvent() print ev if ev['type'] == 'm.room.member': print 'membership event' if ev['membership'] == 'invite' and ev['state_key'] == MYUSERNAME: roomId = ev['room_id'] print "joining room %s" % (roomId) matrixCli.joinRoom(roomId) elif ev['type'] == 'm.room.message': if ev['room_id'] in xmppClients: print "already have a bridge for that user, ignoring" continue print "got message, connecting" xmppClients[ev['room_id']] = TrivialXmppClient(ev['room_id'], ev['user_id']) gevent.spawn(xmppClients[ev['room_id']].xmppLoop) elif ev['type'] == 'm.call.invite': print "Incoming call" #sdp = ev['content']['offer']['sdp'] #print "sdp: %s" % (sdp) #xmppClients[ev['room_id']] = TrivialXmppClient(ev['room_id'], ev['user_id']) #gevent.spawn(xmppClients[ev['room_id']].xmppLoop) elif ev['type'] == 'm.call.answer': print "Call answered" sdp = ev['content']['answer']['sdp'] if ev['room_id'] not in xmppClients: print "We didn't have a call for that room" continue # should probably check call ID too xmppCli = xmppClients[ev['room_id']] xmppCli.sendAnswer(sdp) elif ev['type'] == 'm.call.hangup': if ev['room_id'] in xmppClients: xmppClients[ev['room_id']].stop() del xmppClients[ev['room_id']] class TrivialXmppClient: def __init__(self, matrixRoom, userId): self.rid = 0 self.matrixRoom = matrixRoom self.userId = userId self.running = True def stop(self): self.running = False def nextRid(self): self.rid += 1 return '%d' % (self.rid) def sendIq(self, xml): fullXml = "%s" % (self.nextRid(), self.sid, xml) #print "\t>>>%s" % (fullXml) return self.xmppPoke(fullXml) def xmppPoke(self, xml): headers = {'Content-Type': 'application/xml'} req = grequests.post(HTTPBIND, verify=False, headers=headers, data=xml) resps = grequests.map([req]) obj = BeautifulSoup(resps[0].content) 
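# (the parsed BOSH response is returned to callers such as xmppLoop, which read the sid,
# jid and presence elements off it)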
return obj def sendAnswer(self, answer): print "sdp from matrix client",answer p = subprocess.Popen(['node', 'unjingle/unjingle.js', '--sdp'], stdin=subprocess.PIPE, stdout=subprocess.PIPE) jingle, out_err = p.communicate(answer) jingle = jingle % { 'tojid': self.callfrom, 'action': 'session-accept', 'initiator': self.callfrom, 'responder': self.jid, 'sid': self.callsid } print "answer jingle from sdp",jingle res = self.sendIq(jingle) print "reply from answer: ",res self.ssrcs = {} jingleSoup = BeautifulSoup(jingle) for cont in jingleSoup.iq.jingle.findAll('content'): if cont.description: self.ssrcs[cont['name']] = cont.description['ssrc'] print "my ssrcs:",self.ssrcs gevent.joinall([ gevent.spawn(self.advertiseSsrcs) ]) def advertiseSsrcs(self): time.sleep(7) print "SSRC spammer started" while self.running: ssrcMsg = "%(nick)s" % { 'tojid': "%s@%s/%s" % (ROOMNAME, ROOMDOMAIN, self.shortJid), 'nick': self.userId, 'assrc': self.ssrcs['audio'], 'vssrc': self.ssrcs['video'] } res = self.sendIq(ssrcMsg) print "reply from ssrc announce: ",res time.sleep(10) def xmppLoop(self): self.matrixCallId = time.time() res = self.xmppPoke("" % (self.nextRid(), HOST)) print res self.sid = res.body['sid'] print "sid %s" % (self.sid) res = self.sendIq("") res = self.xmppPoke("" % (self.nextRid(), self.sid, HOST)) res = self.sendIq("") print res self.jid = res.body.iq.bind.jid.string print "jid: %s" % (self.jid) self.shortJid = self.jid.split('-')[0] res = self.sendIq("") #randomthing = res.body.iq['to'] #whatsitpart = randomthing.split('-')[0] #print "other random bind thing: %s" % (randomthing) # advertise preence to the jitsi room, with our nick res = self.sendIq("%s" % (HOST, TURNSERVER, ROOMNAME, ROOMDOMAIN, self.userId)) self.muc = {'users': []} for p in res.body.findAll('presence'): u = {} u['shortJid'] = p['from'].split('/')[1] if p.c and p.c.nick: u['nick'] = p.c.nick.string self.muc['users'].append(u) print "muc: ",self.muc # wait for stuff while True: print "waiting..." res = self.sendIq("") print "got from stream: ",res if res.body.iq: jingles = res.body.iq.findAll('jingle') if len(jingles): self.callfrom = res.body.iq['from'] self.handleInvite(jingles[0]) elif 'type' in res.body and res.body['type'] == 'terminate': self.running = False del xmppClients[self.matrixRoom] return def handleInvite(self, jingle): self.initiator = jingle['initiator'] self.callsid = jingle['sid'] p = subprocess.Popen(['node', 'unjingle/unjingle.js', '--jingle'], stdin=subprocess.PIPE, stdout=subprocess.PIPE) print "raw jingle invite",str(jingle) sdp, out_err = p.communicate(str(jingle)) print "transformed remote offer sdp",sdp inviteEvent = { 'offer': { 'type': 'offer', 'sdp': sdp }, 'call_id': self.matrixCallId, 'version': 0, 'lifetime': 30000 } matrixCli.sendEvent(self.matrixRoom, 'm.call.invite', inviteEvent) matrixCli = TrivialMatrixClient(ACCESS_TOKEN) gevent.joinall([ gevent.spawn(matrixLoop) ]) synapse-0.24.0/contrib/jitsimeetbridge/syweb-jitsi-conference.patch000066400000000000000000000163501317335640100255000ustar00rootroot00000000000000diff --git a/syweb/webclient/app/components/matrix/matrix-call.js b/syweb/webclient/app/components/matrix/matrix-call.js index 9fbfff0..dc68077 100644 --- a/syweb/webclient/app/components/matrix/matrix-call.js +++ b/syweb/webclient/app/components/matrix/matrix-call.js @@ -16,6 +16,45 @@ limitations under the License. 'use strict'; + +function sendKeyframe(pc) { + console.log('sendkeyframe', pc.iceConnectionState); + if (pc.iceConnectionState !== 'connected') return; // safe... 
+ pc.setRemoteDescription( + pc.remoteDescription, + function () { + pc.createAnswer( + function (modifiedAnswer) { + pc.setLocalDescription( + modifiedAnswer, + function () { + // noop + }, + function (error) { + console.log('triggerKeyframe setLocalDescription failed', error); + messageHandler.showError(); + } + ); + }, + function (error) { + console.log('triggerKeyframe createAnswer failed', error); + messageHandler.showError(); + } + ); + }, + function (error) { + console.log('triggerKeyframe setRemoteDescription failed', error); + messageHandler.showError(); + } + ); +} + + + + + + + var forAllVideoTracksOnStream = function(s, f) { var tracks = s.getVideoTracks(); for (var i = 0; i < tracks.length; i++) { @@ -83,7 +122,7 @@ angular.module('MatrixCall', []) } // FIXME: we should prevent any calls from being placed or accepted before this has finished - MatrixCall.getTurnServer(); + //MatrixCall.getTurnServer(); MatrixCall.CALL_TIMEOUT = 60000; MatrixCall.FALLBACK_STUN_SERVER = 'stun:stun.l.google.com:19302'; @@ -132,6 +171,22 @@ angular.module('MatrixCall', []) pc.onsignalingstatechange = function() { self.onSignallingStateChanged(); }; pc.onicecandidate = function(c) { self.gotLocalIceCandidate(c); }; pc.onaddstream = function(s) { self.onAddStream(s); }; + + var datachan = pc.createDataChannel('RTCDataChannel', { + reliable: false + }); + console.log("data chan: "+datachan); + datachan.onopen = function() { + console.log("data channel open"); + }; + datachan.onmessage = function() { + console.log("data channel message"); + }; + pc.ondatachannel = function(event) { + console.log("have data channel"); + event.channel.binaryType = 'blob'; + }; + return pc; } @@ -200,6 +255,12 @@ angular.module('MatrixCall', []) }, this.msg.lifetime - event.age); }; + MatrixCall.prototype.receivedInvite = function(event) { + console.log("Got second invite for call "+this.call_id); + this.peerConn.setRemoteDescription(new RTCSessionDescription(this.msg.offer), this.onSetRemoteDescriptionSuccess, this.onSetRemoteDescriptionError); + }; + + // perverse as it may seem, sometimes we want to instantiate a call with a hangup message // (because when getting the state of the room on load, events come in reverse order and // we want to remember that a call has been hung up) @@ -349,7 +410,7 @@ angular.module('MatrixCall', []) 'mandatory': { 'OfferToReceiveAudio': true, 'OfferToReceiveVideo': this.type == 'video' - }, + } }; this.peerConn.createAnswer(function(d) { self.createdAnswer(d); }, function(e) {}, constraints); // This can't be in an apply() because it's called by a predecessor call under glare conditions :( @@ -359,8 +420,20 @@ angular.module('MatrixCall', []) MatrixCall.prototype.gotLocalIceCandidate = function(event) { if (event.candidate) { console.log("Got local ICE "+event.candidate.sdpMid+" candidate: "+event.candidate.candidate); - this.sendCandidate(event.candidate); - } + //this.sendCandidate(event.candidate); + } else { + console.log("have all candidates, sending answer"); + var content = { + version: 0, + call_id: this.call_id, + answer: this.peerConn.localDescription + }; + this.sendEventWithRetry('m.call.answer', content); + var self = this; + $rootScope.$apply(function() { + self.state = 'connecting'; + }); + } } MatrixCall.prototype.gotRemoteIceCandidate = function(cand) { @@ -418,15 +491,6 @@ angular.module('MatrixCall', []) console.log("Created answer: "+description); var self = this; this.peerConn.setLocalDescription(description, function() { - var content = { - version: 0, - call_id: 
self.call_id, - answer: self.peerConn.localDescription - }; - self.sendEventWithRetry('m.call.answer', content); - $rootScope.$apply(function() { - self.state = 'connecting'; - }); }, function() { console.log("Error setting local description!"); } ); }; @@ -448,6 +512,9 @@ angular.module('MatrixCall', []) $rootScope.$apply(function() { self.state = 'connected'; self.didConnect = true; + /*$timeout(function() { + sendKeyframe(self.peerConn); + }, 1000);*/ }); } else if (this.peerConn.iceConnectionState == 'failed') { this.hangup('ice_failed'); @@ -518,6 +585,7 @@ angular.module('MatrixCall', []) MatrixCall.prototype.onRemoteStreamEnded = function(event) { console.log("Remote stream ended"); + return; var self = this; $rootScope.$apply(function() { self.state = 'ended'; diff --git a/syweb/webclient/app/components/matrix/matrix-phone-service.js b/syweb/webclient/app/components/matrix/matrix-phone-service.js index 55dbbf5..272fa27 100644 --- a/syweb/webclient/app/components/matrix/matrix-phone-service.js +++ b/syweb/webclient/app/components/matrix/matrix-phone-service.js @@ -48,6 +48,13 @@ angular.module('matrixPhoneService', []) return; } + // do we already have an entry for this call ID? + var existingEntry = matrixPhoneService.allCalls[msg.call_id]; + if (existingEntry) { + existingEntry.receivedInvite(msg); + return; + } + var call = undefined; if (!isLive) { // if this event wasn't live then this call may already be over @@ -108,7 +115,7 @@ angular.module('matrixPhoneService', []) call.hangup(); } } else { - $rootScope.$broadcast(matrixPhoneService.INCOMING_CALL_EVENT, call); + $rootScope.$broadcast(matrixPhoneService.INCOMING_CALL_EVENT, call); } } else if (event.type == 'm.call.answer') { var call = matrixPhoneService.allCalls[msg.call_id]; synapse-0.24.0/contrib/jitsimeetbridge/unjingle/000077500000000000000000000000001317335640100217075ustar00rootroot00000000000000synapse-0.24.0/contrib/jitsimeetbridge/unjingle/strophe.jingle.sdp.js000066400000000000000000000666251317335640100260040ustar00rootroot00000000000000/* jshint -W117 */ // SDP STUFF function SDP(sdp) { this.media = sdp.split('\r\nm='); for (var i = 1; i < this.media.length; i++) { this.media[i] = 'm=' + this.media[i]; if (i != this.media.length - 1) { this.media[i] += '\r\n'; } } this.session = this.media.shift() + '\r\n'; this.raw = this.session + this.media.join(''); } exports.SDP = SDP; var jsdom = require("jsdom"); var window = jsdom.jsdom().parentWindow; var $ = require('jquery')(window); var SDPUtil = require('./strophe.jingle.sdp.util.js').SDPUtil; /** * Returns map of MediaChannel mapped per channel idx. 
*/ SDP.prototype.getMediaSsrcMap = function() { var self = this; var media_ssrcs = {}; for (channelNum = 0; channelNum < self.media.length; channelNum++) { modified = true; tmp = SDPUtil.find_lines(self.media[channelNum], 'a=ssrc:'); var type = SDPUtil.parse_mid(SDPUtil.find_line(self.media[channelNum], 'a=mid:')); var channel = new MediaChannel(channelNum, type); media_ssrcs[channelNum] = channel; tmp.forEach(function (line) { var linessrc = line.substring(7).split(' ')[0]; // allocate new ChannelSsrc if(!channel.ssrcs[linessrc]) { channel.ssrcs[linessrc] = new ChannelSsrc(linessrc, type); } channel.ssrcs[linessrc].lines.push(line); }); tmp = SDPUtil.find_lines(self.media[channelNum], 'a=ssrc-group:'); tmp.forEach(function(line){ var semantics = line.substr(0, idx).substr(13); var ssrcs = line.substr(14 + semantics.length).split(' '); if (ssrcs.length != 0) { var ssrcGroup = new ChannelSsrcGroup(semantics, ssrcs); channel.ssrcGroups.push(ssrcGroup); } }); } return media_ssrcs; }; /** * Returns true if this SDP contains given SSRC. * @param ssrc the ssrc to check. * @returns {boolean} true if this SDP contains given SSRC. */ SDP.prototype.containsSSRC = function(ssrc) { var channels = this.getMediaSsrcMap(); var contains = false; Object.keys(channels).forEach(function(chNumber){ var channel = channels[chNumber]; //console.log("Check", channel, ssrc); if(Object.keys(channel.ssrcs).indexOf(ssrc) != -1){ contains = true; } }); return contains; }; /** * Returns map of MediaChannel that contains only media not contained in otherSdp. Mapped by channel idx. * @param otherSdp the other SDP to check ssrc with. */ SDP.prototype.getNewMedia = function(otherSdp) { // this could be useful in Array.prototype. function arrayEquals(array) { // if the other array is a falsy value, return if (!array) return false; // compare lengths - can save a lot of time if (this.length != array.length) return false; for (var i = 0, l=this.length; i < l; i++) { // Check if we have nested arrays if (this[i] instanceof Array && array[i] instanceof Array) { // recurse into the nested arrays if (!this[i].equals(array[i])) return false; } else if (this[i] != array[i]) { // Warning - two different object instances will never be equal: {x:20} != {x:20} return false; } } return true; } var myMedia = this.getMediaSsrcMap(); var othersMedia = otherSdp.getMediaSsrcMap(); var newMedia = {}; Object.keys(othersMedia).forEach(function(channelNum) { var myChannel = myMedia[channelNum]; var othersChannel = othersMedia[channelNum]; if(!myChannel && othersChannel) { // Add whole channel newMedia[channelNum] = othersChannel; return; } // Look for new ssrcs accross the channel Object.keys(othersChannel.ssrcs).forEach(function(ssrc) { if(Object.keys(myChannel.ssrcs).indexOf(ssrc) === -1) { // Allocate channel if we've found ssrc that doesn't exist in our channel if(!newMedia[channelNum]){ newMedia[channelNum] = new MediaChannel(othersChannel.chNumber, othersChannel.mediaType); } newMedia[channelNum].ssrcs[ssrc] = othersChannel.ssrcs[ssrc]; } }); // Look for new ssrc groups across the channels othersChannel.ssrcGroups.forEach(function(otherSsrcGroup){ // try to match the other ssrc-group with an ssrc-group of ours var matched = false; for (var i = 0; i < myChannel.ssrcGroups.length; i++) { var mySsrcGroup = myChannel.ssrcGroups[i]; if (otherSsrcGroup.semantics == mySsrcGroup.semantics && arrayEquals.apply(otherSsrcGroup.ssrcs, [mySsrcGroup.ssrcs])) { matched = true; break; } } if (!matched) { // Allocate channel if we've found an ssrc-group 
that doesn't // exist in our channel if(!newMedia[channelNum]){ newMedia[channelNum] = new MediaChannel(othersChannel.chNumber, othersChannel.mediaType); } newMedia[channelNum].ssrcGroups.push(otherSsrcGroup); } }); }); return newMedia; }; // remove iSAC and CN from SDP SDP.prototype.mangle = function () { var i, j, mline, lines, rtpmap, newdesc; for (i = 0; i < this.media.length; i++) { lines = this.media[i].split('\r\n'); lines.pop(); // remove empty last element mline = SDPUtil.parse_mline(lines.shift()); if (mline.media != 'audio') continue; newdesc = ''; mline.fmt.length = 0; for (j = 0; j < lines.length; j++) { if (lines[j].substr(0, 9) == 'a=rtpmap:') { rtpmap = SDPUtil.parse_rtpmap(lines[j]); if (rtpmap.name == 'CN' || rtpmap.name == 'ISAC') continue; mline.fmt.push(rtpmap.id); newdesc += lines[j] + '\r\n'; } else { newdesc += lines[j] + '\r\n'; } } this.media[i] = SDPUtil.build_mline(mline) + '\r\n'; this.media[i] += newdesc; } this.raw = this.session + this.media.join(''); }; // remove lines matching prefix from session section SDP.prototype.removeSessionLines = function(prefix) { var self = this; var lines = SDPUtil.find_lines(this.session, prefix); lines.forEach(function(line) { self.session = self.session.replace(line + '\r\n', ''); }); this.raw = this.session + this.media.join(''); return lines; } // remove lines matching prefix from a media section specified by mediaindex // TODO: non-numeric mediaindex could match mid SDP.prototype.removeMediaLines = function(mediaindex, prefix) { var self = this; var lines = SDPUtil.find_lines(this.media[mediaindex], prefix); lines.forEach(function(line) { self.media[mediaindex] = self.media[mediaindex].replace(line + '\r\n', ''); }); this.raw = this.session + this.media.join(''); return lines; } // add content's to a jingle element SDP.prototype.toJingle = function (elem, thecreator) { var i, j, k, mline, ssrc, rtpmap, tmp, line, lines; var self = this; // new bundle plan if (SDPUtil.find_line(this.session, 'a=group:')) { lines = SDPUtil.find_lines(this.session, 'a=group:'); for (i = 0; i < lines.length; i++) { tmp = lines[i].split(' '); var semantics = tmp.shift().substr(8); elem.c('group', {xmlns: 'urn:xmpp:jingle:apps:grouping:0', semantics:semantics}); for (j = 0; j < tmp.length; j++) { elem.c('content', {name: tmp[j]}).up(); } elem.up(); } } // old bundle plan, to be removed var bundle = []; if (SDPUtil.find_line(this.session, 'a=group:BUNDLE')) { bundle = SDPUtil.find_line(this.session, 'a=group:BUNDLE ').split(' '); bundle.shift(); } for (i = 0; i < this.media.length; i++) { mline = SDPUtil.parse_mline(this.media[i].split('\r\n')[0]); if (!(mline.media === 'audio' || mline.media === 'video' || mline.media === 'application')) { continue; } if (SDPUtil.find_line(this.media[i], 'a=ssrc:')) { ssrc = SDPUtil.find_line(this.media[i], 'a=ssrc:').substring(7).split(' ')[0]; // take the first } else { ssrc = false; } elem.c('content', {creator: thecreator, name: mline.media}); if (SDPUtil.find_line(this.media[i], 'a=mid:')) { // prefer identifier from a=mid if present var mid = SDPUtil.parse_mid(SDPUtil.find_line(this.media[i], 'a=mid:')); elem.attrs({ name: mid }); // old BUNDLE plan, to be removed if (bundle.indexOf(mid) !== -1) { elem.c('bundle', {xmlns: 'http://estos.de/ns/bundle'}).up(); bundle.splice(bundle.indexOf(mid), 1); } } if (SDPUtil.find_line(this.media[i], 'a=rtpmap:').length) { elem.c('description', {xmlns: 'urn:xmpp:jingle:apps:rtp:1', media: mline.media }); if (ssrc) { elem.attrs({ssrc: ssrc}); } for (j = 0; j < 
mline.fmt.length; j++) { rtpmap = SDPUtil.find_line(this.media[i], 'a=rtpmap:' + mline.fmt[j]); elem.c('payload-type', SDPUtil.parse_rtpmap(rtpmap)); // put any 'a=fmtp:' + mline.fmt[j] lines into if (SDPUtil.find_line(this.media[i], 'a=fmtp:' + mline.fmt[j])) { tmp = SDPUtil.parse_fmtp(SDPUtil.find_line(this.media[i], 'a=fmtp:' + mline.fmt[j])); for (k = 0; k < tmp.length; k++) { elem.c('parameter', tmp[k]).up(); } } this.RtcpFbToJingle(i, elem, mline.fmt[j]); // XEP-0293 -- map a=rtcp-fb elem.up(); } if (SDPUtil.find_line(this.media[i], 'a=crypto:', this.session)) { elem.c('encryption', {required: 1}); var crypto = SDPUtil.find_lines(this.media[i], 'a=crypto:', this.session); crypto.forEach(function(line) { elem.c('crypto', SDPUtil.parse_crypto(line)).up(); }); elem.up(); // end of encryption } if (ssrc) { // new style mapping elem.c('source', { ssrc: ssrc, xmlns: 'urn:xmpp:jingle:apps:rtp:ssma:0' }); // FIXME: group by ssrc and support multiple different ssrcs var ssrclines = SDPUtil.find_lines(this.media[i], 'a=ssrc:'); ssrclines.forEach(function(line) { idx = line.indexOf(' '); var linessrc = line.substr(0, idx).substr(7); if (linessrc != ssrc) { elem.up(); ssrc = linessrc; elem.c('source', { ssrc: ssrc, xmlns: 'urn:xmpp:jingle:apps:rtp:ssma:0' }); } var kv = line.substr(idx + 1); elem.c('parameter'); if (kv.indexOf(':') == -1) { elem.attrs({ name: kv }); } else { elem.attrs({ name: kv.split(':', 2)[0] }); elem.attrs({ value: kv.split(':', 2)[1] }); } elem.up(); }); elem.up(); // old proprietary mapping, to be removed at some point tmp = SDPUtil.parse_ssrc(this.media[i]); tmp.xmlns = 'http://estos.de/ns/ssrc'; tmp.ssrc = ssrc; elem.c('ssrc', tmp).up(); // ssrc is part of description // XEP-0339 handle ssrc-group attributes var ssrc_group_lines = SDPUtil.find_lines(this.media[i], 'a=ssrc-group:'); ssrc_group_lines.forEach(function(line) { idx = line.indexOf(' '); var semantics = line.substr(0, idx).substr(13); var ssrcs = line.substr(14 + semantics.length).split(' '); if (ssrcs.length != 0) { elem.c('ssrc-group', { semantics: semantics, xmlns: 'urn:xmpp:jingle:apps:rtp:ssma:0' }); ssrcs.forEach(function(ssrc) { elem.c('source', { ssrc: ssrc }) .up(); }); elem.up(); } }); } if (SDPUtil.find_line(this.media[i], 'a=rtcp-mux')) { elem.c('rtcp-mux').up(); } // XEP-0293 -- map a=rtcp-fb:* this.RtcpFbToJingle(i, elem, '*'); // XEP-0294 if (SDPUtil.find_line(this.media[i], 'a=extmap:')) { lines = SDPUtil.find_lines(this.media[i], 'a=extmap:'); for (j = 0; j < lines.length; j++) { tmp = SDPUtil.parse_extmap(lines[j]); elem.c('rtp-hdrext', { xmlns: 'urn:xmpp:jingle:apps:rtp:rtp-hdrext:0', uri: tmp.uri, id: tmp.value }); if (tmp.hasOwnProperty('direction')) { switch (tmp.direction) { case 'sendonly': elem.attrs({senders: 'responder'}); break; case 'recvonly': elem.attrs({senders: 'initiator'}); break; case 'sendrecv': elem.attrs({senders: 'both'}); break; case 'inactive': elem.attrs({senders: 'none'}); break; } } // TODO: handle params elem.up(); } } elem.up(); // end of description } // map ice-ufrag/pwd, dtls fingerprint, candidates this.TransportToJingle(i, elem); if (SDPUtil.find_line(this.media[i], 'a=sendrecv', this.session)) { elem.attrs({senders: 'both'}); } else if (SDPUtil.find_line(this.media[i], 'a=sendonly', this.session)) { elem.attrs({senders: 'initiator'}); } else if (SDPUtil.find_line(this.media[i], 'a=recvonly', this.session)) { elem.attrs({senders: 'responder'}); } else if (SDPUtil.find_line(this.media[i], 'a=inactive', this.session)) { elem.attrs({senders: 'none'}); } if 
(mline.port == '0') { // estos hack to reject an m-line elem.attrs({senders: 'rejected'}); } elem.up(); // end of content } elem.up(); return elem; }; SDP.prototype.TransportToJingle = function (mediaindex, elem) { var i = mediaindex; var tmp; var self = this; elem.c('transport'); // XEP-0343 DTLS/SCTP if (SDPUtil.find_line(this.media[mediaindex], 'a=sctpmap:').length) { var sctpmap = SDPUtil.find_line( this.media[i], 'a=sctpmap:', self.session); if (sctpmap) { var sctpAttrs = SDPUtil.parse_sctpmap(sctpmap); elem.c('sctpmap', { xmlns: 'urn:xmpp:jingle:transports:dtls-sctp:1', number: sctpAttrs[0], /* SCTP port */ protocol: sctpAttrs[1], /* protocol */ }); // Optional stream count attribute if (sctpAttrs.length > 2) elem.attrs({ streams: sctpAttrs[2]}); elem.up(); } } // XEP-0320 var fingerprints = SDPUtil.find_lines(this.media[mediaindex], 'a=fingerprint:', this.session); fingerprints.forEach(function(line) { tmp = SDPUtil.parse_fingerprint(line); tmp.xmlns = 'urn:xmpp:jingle:apps:dtls:0'; elem.c('fingerprint').t(tmp.fingerprint); delete tmp.fingerprint; line = SDPUtil.find_line(self.media[mediaindex], 'a=setup:', self.session); if (line) { tmp.setup = line.substr(8); } elem.attrs(tmp); elem.up(); // end of fingerprint }); tmp = SDPUtil.iceparams(this.media[mediaindex], this.session); if (tmp) { tmp.xmlns = 'urn:xmpp:jingle:transports:ice-udp:1'; elem.attrs(tmp); // XEP-0176 if (SDPUtil.find_line(this.media[mediaindex], 'a=candidate:', this.session)) { // add any a=candidate lines var lines = SDPUtil.find_lines(this.media[mediaindex], 'a=candidate:', this.session); lines.forEach(function (line) { elem.c('candidate', SDPUtil.candidateToJingle(line)).up(); }); } } elem.up(); // end of transport } SDP.prototype.RtcpFbToJingle = function (mediaindex, elem, payloadtype) { // XEP-0293 var lines = SDPUtil.find_lines(this.media[mediaindex], 'a=rtcp-fb:' + payloadtype); lines.forEach(function (line) { var tmp = SDPUtil.parse_rtcpfb(line); if (tmp.type == 'trr-int') { elem.c('rtcp-fb-trr-int', {xmlns: 'urn:xmpp:jingle:apps:rtp:rtcp-fb:0', value: tmp.params[0]}); elem.up(); } else { elem.c('rtcp-fb', {xmlns: 'urn:xmpp:jingle:apps:rtp:rtcp-fb:0', type: tmp.type}); if (tmp.params.length > 0) { elem.attrs({'subtype': tmp.params[0]}); } elem.up(); } }); }; SDP.prototype.RtcpFbFromJingle = function (elem, payloadtype) { // XEP-0293 var media = ''; var tmp = elem.find('>rtcp-fb-trr-int[xmlns="urn:xmpp:jingle:apps:rtp:rtcp-fb:0"]'); if (tmp.length) { media += 'a=rtcp-fb:' + '*' + ' ' + 'trr-int' + ' '; if (tmp.attr('value')) { media += tmp.attr('value'); } else { media += '0'; } media += '\r\n'; } tmp = elem.find('>rtcp-fb[xmlns="urn:xmpp:jingle:apps:rtp:rtcp-fb:0"]'); tmp.each(function () { media += 'a=rtcp-fb:' + payloadtype + ' ' + $(this).attr('type'); if ($(this).attr('subtype')) { media += ' ' + $(this).attr('subtype'); } media += '\r\n'; }); return media; }; // construct an SDP from a jingle stanza SDP.prototype.fromJingle = function (jingle) { var self = this; this.raw = 'v=0\r\n' + 'o=- ' + '1923518516' + ' 2 IN IP4 0.0.0.0\r\n' +// FIXME 's=-\r\n' + 't=0 0\r\n'; // http://tools.ietf.org/html/draft-ietf-mmusic-sdp-bundle-negotiation-04#section-8 if ($(jingle).find('>group[xmlns="urn:xmpp:jingle:apps:grouping:0"]').length) { $(jingle).find('>group[xmlns="urn:xmpp:jingle:apps:grouping:0"]').each(function (idx, group) { var contents = $(group).find('>content').map(function (idx, content) { return content.getAttribute('name'); }).get(); if (contents.length > 0) { self.raw += 'a=group:' + 
(group.getAttribute('semantics') || group.getAttribute('type')) + ' ' + contents.join(' ') + '\r\n'; } }); } else if ($(jingle).find('>group[xmlns="urn:ietf:rfc:5888"]').length) { // temporary namespace, not to be used. to be removed soon. $(jingle).find('>group[xmlns="urn:ietf:rfc:5888"]').each(function (idx, group) { var contents = $(group).find('>content').map(function (idx, content) { return content.getAttribute('name'); }).get(); if (group.getAttribute('type') !== null && contents.length > 0) { self.raw += 'a=group:' + group.getAttribute('type') + ' ' + contents.join(' ') + '\r\n'; } }); } else { // for backward compability, to be removed soon // assume all contents are in the same bundle group, can be improved upon later var bundle = $(jingle).find('>content').filter(function (idx, content) { //elem.c('bundle', {xmlns:'http://estos.de/ns/bundle'}); return $(content).find('>bundle').length > 0; }).map(function (idx, content) { return content.getAttribute('name'); }).get(); if (bundle.length) { this.raw += 'a=group:BUNDLE ' + bundle.join(' ') + '\r\n'; } } this.session = this.raw; jingle.find('>content').each(function () { var m = self.jingle2media($(this)); self.media.push(m); }); // reconstruct msid-semantic -- apparently not necessary /* var msid = SDPUtil.parse_ssrc(this.raw); if (msid.hasOwnProperty('mslabel')) { this.session += "a=msid-semantic: WMS " + msid.mslabel + "\r\n"; } */ this.raw = this.session + this.media.join(''); }; // translate a jingle content element into an an SDP media part SDP.prototype.jingle2media = function (content) { var media = '', desc = content.find('description'), ssrc = desc.attr('ssrc'), self = this, tmp; var sctp = content.find( '>transport>sctpmap[xmlns="urn:xmpp:jingle:transports:dtls-sctp:1"]'); tmp = { media: desc.attr('media') }; tmp.port = '1'; if (content.attr('senders') == 'rejected') { // estos hack to reject an m-line. 
tmp.port = '0'; } if (content.find('>transport>fingerprint').length || desc.find('encryption').length) { if (sctp.length) tmp.proto = 'DTLS/SCTP'; else tmp.proto = 'RTP/SAVPF'; } else { tmp.proto = 'RTP/AVPF'; } if (!sctp.length) { tmp.fmt = desc.find('payload-type').map( function () { return this.getAttribute('id'); }).get(); media += SDPUtil.build_mline(tmp) + '\r\n'; } else { media += 'm=application 1 DTLS/SCTP ' + sctp.attr('number') + '\r\n'; media += 'a=sctpmap:' + sctp.attr('number') + ' ' + sctp.attr('protocol'); var streamCount = sctp.attr('streams'); if (streamCount) media += ' ' + streamCount + '\r\n'; else media += '\r\n'; } media += 'c=IN IP4 0.0.0.0\r\n'; if (!sctp.length) media += 'a=rtcp:1 IN IP4 0.0.0.0\r\n'; //tmp = content.find('>transport[xmlns="urn:xmpp:jingle:transports:ice-udp:1"]'); tmp = content.find('>bundle>transport[xmlns="urn:xmpp:jingle:transports:ice-udp:1"]'); //console.log('transports: '+content.find('>transport[xmlns="urn:xmpp:jingle:transports:ice-udp:1"]').length); //console.log('bundle.transports: '+content.find('>bundle>transport[xmlns="urn:xmpp:jingle:transports:ice-udp:1"]').length); //console.log("tmp fingerprint: "+tmp.find('>fingerprint').innerHTML); if (tmp.length) { if (tmp.attr('ufrag')) { media += SDPUtil.build_iceufrag(tmp.attr('ufrag')) + '\r\n'; } if (tmp.attr('pwd')) { media += SDPUtil.build_icepwd(tmp.attr('pwd')) + '\r\n'; } tmp.find('>fingerprint').each(function () { // FIXME: check namespace at some point media += 'a=fingerprint:' + this.getAttribute('hash'); media += ' ' + $(this).text(); media += '\r\n'; //console.log("mline "+media); if (this.getAttribute('setup')) { media += 'a=setup:' + this.getAttribute('setup') + '\r\n'; } }); } switch (content.attr('senders')) { case 'initiator': media += 'a=sendonly\r\n'; break; case 'responder': media += 'a=recvonly\r\n'; break; case 'none': media += 'a=inactive\r\n'; break; case 'both': media += 'a=sendrecv\r\n'; break; } media += 'a=mid:' + content.attr('name') + '\r\n'; /*if (content.attr('name') == 'video') { media += 'a=x-google-flag:conference' + '\r\n'; }*/ // // see http://code.google.com/p/libjingle/issues/detail?id=309 -- no spec though // and http://mail.jabber.org/pipermail/jingle/2011-December/001761.html if (desc.find('rtcp-mux').length) { media += 'a=rtcp-mux\r\n'; } if (desc.find('encryption').length) { desc.find('encryption>crypto').each(function () { media += 'a=crypto:' + this.getAttribute('tag'); media += ' ' + this.getAttribute('crypto-suite'); media += ' ' + this.getAttribute('key-params'); if (this.getAttribute('session-params')) { media += ' ' + this.getAttribute('session-params'); } media += '\r\n'; }); } desc.find('payload-type').each(function () { media += SDPUtil.build_rtpmap(this) + '\r\n'; if ($(this).find('>parameter').length) { media += 'a=fmtp:' + this.getAttribute('id') + ' '; media += $(this).find('parameter').map(function () { return (this.getAttribute('name') ? 
(this.getAttribute('name') + '=') : '') + this.getAttribute('value'); }).get().join('; '); media += '\r\n'; } // xep-0293 media += self.RtcpFbFromJingle($(this), this.getAttribute('id')); }); // xep-0293 media += self.RtcpFbFromJingle(desc, '*'); // xep-0294 tmp = desc.find('>rtp-hdrext[xmlns="urn:xmpp:jingle:apps:rtp:rtp-hdrext:0"]'); tmp.each(function () { media += 'a=extmap:' + this.getAttribute('id') + ' ' + this.getAttribute('uri') + '\r\n'; }); content.find('>bundle>transport[xmlns="urn:xmpp:jingle:transports:ice-udp:1"]>candidate').each(function () { media += SDPUtil.candidateFromJingle(this); }); // XEP-0339 handle ssrc-group attributes tmp = content.find('description>ssrc-group[xmlns="urn:xmpp:jingle:apps:rtp:ssma:0"]').each(function() { var semantics = this.getAttribute('semantics'); var ssrcs = $(this).find('>source').map(function() { return this.getAttribute('ssrc'); }).get(); if (ssrcs.length != 0) { media += 'a=ssrc-group:' + semantics + ' ' + ssrcs.join(' ') + '\r\n'; } }); tmp = content.find('description>source[xmlns="urn:xmpp:jingle:apps:rtp:ssma:0"]'); tmp.each(function () { var ssrc = this.getAttribute('ssrc'); $(this).find('>parameter').each(function () { media += 'a=ssrc:' + ssrc + ' ' + this.getAttribute('name'); if (this.getAttribute('value') && this.getAttribute('value').length) media += ':' + this.getAttribute('value'); media += '\r\n'; }); }); if (tmp.length === 0) { // fallback to proprietary mapping of a=ssrc lines tmp = content.find('description>ssrc[xmlns="http://estos.de/ns/ssrc"]'); if (tmp.length) { media += 'a=ssrc:' + ssrc + ' cname:' + tmp.attr('cname') + '\r\n'; media += 'a=ssrc:' + ssrc + ' msid:' + tmp.attr('msid') + '\r\n'; media += 'a=ssrc:' + ssrc + ' mslabel:' + tmp.attr('mslabel') + '\r\n'; media += 'a=ssrc:' + ssrc + ' label:' + tmp.attr('label') + '\r\n'; } } return media; }; synapse-0.24.0/contrib/jitsimeetbridge/unjingle/strophe.jingle.sdp.util.js000066400000000000000000000334751317335640100267550ustar00rootroot00000000000000/** * Contains utility classes used in SDP class. * */ /** * Class holds a=ssrc lines and media type a=mid * @param ssrc synchronization source identifier number(a=ssrc lines from SDP) * @param type media type eg. "audio" or "video"(a=mid frm SDP) * @constructor */ function ChannelSsrc(ssrc, type) { this.ssrc = ssrc; this.type = type; this.lines = []; } /** * Class holds a=ssrc-group: lines * @param semantics * @param ssrcs * @constructor */ function ChannelSsrcGroup(semantics, ssrcs, line) { this.semantics = semantics; this.ssrcs = ssrcs; } /** * Helper class represents media channel. Is a container for ChannelSsrc, holds channel idx and media type. * @param channelNumber channel idx in SDP media array. * @param mediaType media type(a=mid) * @constructor */ function MediaChannel(channelNumber, mediaType) { /** * SDP channel number * @type {*} */ this.chNumber = channelNumber; /** * Channel media type(a=mid) * @type {*} */ this.mediaType = mediaType; /** * The maps of ssrc numbers to ChannelSsrc objects. */ this.ssrcs = {}; /** * The array of ChannelSsrcGroup objects. 
* @type {Array} */ this.ssrcGroups = []; } SDPUtil = { iceparams: function (mediadesc, sessiondesc) { var data = null; if (SDPUtil.find_line(mediadesc, 'a=ice-ufrag:', sessiondesc) && SDPUtil.find_line(mediadesc, 'a=ice-pwd:', sessiondesc)) { data = { ufrag: SDPUtil.parse_iceufrag(SDPUtil.find_line(mediadesc, 'a=ice-ufrag:', sessiondesc)), pwd: SDPUtil.parse_icepwd(SDPUtil.find_line(mediadesc, 'a=ice-pwd:', sessiondesc)) }; } return data; }, parse_iceufrag: function (line) { return line.substring(12); }, build_iceufrag: function (frag) { return 'a=ice-ufrag:' + frag; }, parse_icepwd: function (line) { return line.substring(10); }, build_icepwd: function (pwd) { return 'a=ice-pwd:' + pwd; }, parse_mid: function (line) { return line.substring(6); }, parse_mline: function (line) { var parts = line.substring(2).split(' '), data = {}; data.media = parts.shift(); data.port = parts.shift(); data.proto = parts.shift(); if (parts[parts.length - 1] === '') { // trailing whitespace parts.pop(); } data.fmt = parts; return data; }, build_mline: function (mline) { return 'm=' + mline.media + ' ' + mline.port + ' ' + mline.proto + ' ' + mline.fmt.join(' '); }, parse_rtpmap: function (line) { var parts = line.substring(9).split(' '), data = {}; data.id = parts.shift(); parts = parts[0].split('/'); data.name = parts.shift(); data.clockrate = parts.shift(); data.channels = parts.length ? parts.shift() : '1'; return data; }, /** * Parses SDP line "a=sctpmap:..." and extracts SCTP port from it. * @param line eg. "a=sctpmap:5000 webrtc-datachannel" * @returns [SCTP port number, protocol, streams] */ parse_sctpmap: function (line) { var parts = line.substring(10).split(' '); var sctpPort = parts[0]; var protocol = parts[1]; // Stream count is optional var streamCount = parts.length > 2 ? parts[2] : null; return [sctpPort, protocol, streamCount];// SCTP port }, build_rtpmap: function (el) { var line = 'a=rtpmap:' + el.getAttribute('id') + ' ' + el.getAttribute('name') + '/' + el.getAttribute('clockrate'); if (el.getAttribute('channels') && el.getAttribute('channels') != '1') { line += '/' + el.getAttribute('channels'); } return line; }, parse_crypto: function (line) { var parts = line.substring(9).split(' '), data = {}; data.tag = parts.shift(); data['crypto-suite'] = parts.shift(); data['key-params'] = parts.shift(); if (parts.length) { data['session-params'] = parts.join(' '); } return data; }, parse_fingerprint: function (line) { // RFC 4572 var parts = line.substring(14).split(' '), data = {}; data.hash = parts.shift(); data.fingerprint = parts.shift(); // TODO assert that fingerprint satisfies 2UHEX *(":" 2UHEX) ? 
return data; }, parse_fmtp: function (line) { var parts = line.split(' '), i, key, value, data = []; parts.shift(); parts = parts.join(' ').split(';'); for (i = 0; i < parts.length; i++) { key = parts[i].split('=')[0]; while (key.length && key[0] == ' ') { key = key.substring(1); } value = parts[i].split('=')[1]; if (key && value) { data.push({name: key, value: value}); } else if (key) { // rfc 4733 (DTMF) style stuff data.push({name: '', value: key}); } } return data; }, parse_icecandidate: function (line) { var candidate = {}, elems = line.split(' '); candidate.foundation = elems[0].substring(12); candidate.component = elems[1]; candidate.protocol = elems[2].toLowerCase(); candidate.priority = elems[3]; candidate.ip = elems[4]; candidate.port = elems[5]; // elems[6] => "typ" candidate.type = elems[7]; candidate.generation = 0; // default value, may be overwritten below for (var i = 8; i < elems.length; i += 2) { switch (elems[i]) { case 'raddr': candidate['rel-addr'] = elems[i + 1]; break; case 'rport': candidate['rel-port'] = elems[i + 1]; break; case 'generation': candidate.generation = elems[i + 1]; break; case 'tcptype': candidate.tcptype = elems[i + 1]; break; default: // TODO console.log('parse_icecandidate not translating "' + elems[i] + '" = "' + elems[i + 1] + '"'); } } candidate.network = '1'; candidate.id = Math.random().toString(36).substr(2, 10); // not applicable to SDP -- FIXME: should be unique, not just random return candidate; }, build_icecandidate: function (cand) { var line = ['a=candidate:' + cand.foundation, cand.component, cand.protocol, cand.priority, cand.ip, cand.port, 'typ', cand.type].join(' '); line += ' '; switch (cand.type) { case 'srflx': case 'prflx': case 'relay': if (cand.hasOwnAttribute('rel-addr') && cand.hasOwnAttribute('rel-port')) { line += 'raddr'; line += ' '; line += cand['rel-addr']; line += ' '; line += 'rport'; line += ' '; line += cand['rel-port']; line += ' '; } break; } if (cand.hasOwnAttribute('tcptype')) { line += 'tcptype'; line += ' '; line += cand.tcptype; line += ' '; } line += 'generation'; line += ' '; line += cand.hasOwnAttribute('generation') ? cand.generation : '0'; return line; }, parse_ssrc: function (desc) { // proprietary mapping of a=ssrc lines // TODO: see "Jingle RTP Source Description" by Juberti and P. 
Thatcher on google docs // and parse according to that var lines = desc.split('\r\n'), data = {}; for (var i = 0; i < lines.length; i++) { if (lines[i].substring(0, 7) == 'a=ssrc:') { var idx = lines[i].indexOf(' '); data[lines[i].substr(idx + 1).split(':', 2)[0]] = lines[i].substr(idx + 1).split(':', 2)[1]; } } return data; }, parse_rtcpfb: function (line) { var parts = line.substr(10).split(' '); var data = {}; data.pt = parts.shift(); data.type = parts.shift(); data.params = parts; return data; }, parse_extmap: function (line) { var parts = line.substr(9).split(' '); var data = {}; data.value = parts.shift(); if (data.value.indexOf('/') != -1) { data.direction = data.value.substr(data.value.indexOf('/') + 1); data.value = data.value.substr(0, data.value.indexOf('/')); } else { data.direction = 'both'; } data.uri = parts.shift(); data.params = parts; return data; }, find_line: function (haystack, needle, sessionpart) { var lines = haystack.split('\r\n'); for (var i = 0; i < lines.length; i++) { if (lines[i].substring(0, needle.length) == needle) { return lines[i]; } } if (!sessionpart) { return false; } // search session part lines = sessionpart.split('\r\n'); for (var j = 0; j < lines.length; j++) { if (lines[j].substring(0, needle.length) == needle) { return lines[j]; } } return false; }, find_lines: function (haystack, needle, sessionpart) { var lines = haystack.split('\r\n'), needles = []; for (var i = 0; i < lines.length; i++) { if (lines[i].substring(0, needle.length) == needle) needles.push(lines[i]); } if (needles.length || !sessionpart) { return needles; } // search session part lines = sessionpart.split('\r\n'); for (var j = 0; j < lines.length; j++) { if (lines[j].substring(0, needle.length) == needle) { needles.push(lines[j]); } } return needles; }, candidateToJingle: function (line) { // a=candidate:2979166662 1 udp 2113937151 192.168.2.100 57698 typ host generation 0 // if (line.indexOf('candidate:') === 0) { line = 'a=' + line; } else if (line.substring(0, 12) != 'a=candidate:') { console.log('parseCandidate called with a line that is not a candidate line'); console.log(line); return null; } if (line.substring(line.length - 2) == '\r\n') // chomp it line = line.substring(0, line.length - 2); var candidate = {}, elems = line.split(' '), i; if (elems[6] != 'typ') { console.log('did not find typ in the right place'); console.log(line); return null; } candidate.foundation = elems[0].substring(12); candidate.component = elems[1]; candidate.protocol = elems[2].toLowerCase(); candidate.priority = elems[3]; candidate.ip = elems[4]; candidate.port = elems[5]; // elems[6] => "typ" candidate.type = elems[7]; candidate.generation = '0'; // default, may be overwritten below for (i = 8; i < elems.length; i += 2) { switch (elems[i]) { case 'raddr': candidate['rel-addr'] = elems[i + 1]; break; case 'rport': candidate['rel-port'] = elems[i + 1]; break; case 'generation': candidate.generation = elems[i + 1]; break; case 'tcptype': candidate.tcptype = elems[i + 1]; break; default: // TODO console.log('not translating "' + elems[i] + '" = "' + elems[i + 1] + '"'); } } candidate.network = '1'; candidate.id = Math.random().toString(36).substr(2, 10); // not applicable to SDP -- FIXME: should be unique, not just random return candidate; }, candidateFromJingle: function (cand) { var line = 'a=candidate:'; line += cand.getAttribute('foundation'); line += ' '; line += cand.getAttribute('component'); line += ' '; line += cand.getAttribute('protocol'); //.toUpperCase(); // chrome M23 doesn't like 
this line += ' '; line += cand.getAttribute('priority'); line += ' '; line += cand.getAttribute('ip'); line += ' '; line += cand.getAttribute('port'); line += ' '; line += 'typ'; line += ' ' + cand.getAttribute('type'); line += ' '; switch (cand.getAttribute('type')) { case 'srflx': case 'prflx': case 'relay': if (cand.getAttribute('rel-addr') && cand.getAttribute('rel-port')) { line += 'raddr'; line += ' '; line += cand.getAttribute('rel-addr'); line += ' '; line += 'rport'; line += ' '; line += cand.getAttribute('rel-port'); line += ' '; } break; } if (cand.getAttribute('protocol').toLowerCase() == 'tcp') { line += 'tcptype'; line += ' '; line += cand.getAttribute('tcptype'); line += ' '; } line += 'generation'; line += ' '; line += cand.getAttribute('generation') || '0'; return line + '\r\n'; } }; exports.SDPUtil = SDPUtil; synapse-0.24.0/contrib/jitsimeetbridge/unjingle/strophe/000077500000000000000000000000001317335640100233735ustar00rootroot00000000000000synapse-0.24.0/contrib/jitsimeetbridge/unjingle/strophe/XMLHttpRequest.js000066400000000000000000000131121317335640100266000ustar00rootroot00000000000000/** * Wrapper for built-in http.js to emulate the browser XMLHttpRequest object. * * This can be used with JS designed for browsers to improve reuse of code and * allow the use of existing libraries. * * Usage: include("XMLHttpRequest.js") and use XMLHttpRequest per W3C specs. * * @todo SSL Support * @author Dan DeFelippi * @license MIT */ var Url = require("url") ,sys = require("util"); exports.XMLHttpRequest = function() { /** * Private variables */ var self = this; var http = require('http'); var https = require('https'); // Holds http.js objects var client; var request; var response; // Request settings var settings = {}; // Set some default headers var defaultHeaders = { "User-Agent": "node.js", "Accept": "*/*", }; var headers = defaultHeaders; /** * Constants */ this.UNSENT = 0; this.OPENED = 1; this.HEADERS_RECEIVED = 2; this.LOADING = 3; this.DONE = 4; /** * Public vars */ // Current state this.readyState = this.UNSENT; // default ready state change handler in case one is not set or is set late this.onreadystatechange = function() {}; // Result & response this.responseText = ""; this.responseXML = ""; this.status = null; this.statusText = null; /** * Open the connection. Currently supports local server requests. * * @param string method Connection method (eg GET, POST) * @param string url URL for the connection. * @param boolean async Asynchronous connection. Default is true. * @param string user Username for basic authentication (optional) * @param string password Password for basic authentication (optional) */ this.open = function(method, url, async, user, password) { settings = { "method": method, "url": url, "async": async || null, "user": user || null, "password": password || null }; this.abort(); setState(this.OPENED); }; /** * Sets a header for the request. * * @param string header Header name * @param string value Header value */ this.setRequestHeader = function(header, value) { headers[header] = value; }; /** * Gets a header from the server response. * * @param string header Name of header to get. * @return string Text of the header or null if it doesn't exist. */ this.getResponseHeader = function(header) { if (this.readyState > this.OPENED && response.headers[header]) { return header + ": " + response.headers[header]; } return null; }; /** * Gets all the response headers. 
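	 *
	 * A minimal sketch of reading the headers with this wrapper from node once
	 * a request completes (the URL below is only a placeholder):
	 *
	 *   var XMLHttpRequest = require('./XMLHttpRequest.js').XMLHttpRequest;
	 *   var xhr = new XMLHttpRequest();
	 *   xhr.open('GET', 'http://localhost:5280/http-bind', true);
	 *   xhr.onreadystatechange = function () {
	 *       if (xhr.readyState === xhr.DONE) {
	 *           console.log(xhr.getAllResponseHeaders());
	 *       }
	 *   };
	 *   xhr.send(null);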
* * @return string */ this.getAllResponseHeaders = function() { if (this.readyState < this.HEADERS_RECEIVED) { throw "INVALID_STATE_ERR: Headers have not been received."; } var result = ""; for (var i in response.headers) { result += i + ": " + response.headers[i] + "\r\n"; } return result.substr(0, result.length - 2); }; /** * Sends the request to the server. * * @param string data Optional data to send as request body. */ this.send = function(data) { if (this.readyState != this.OPENED) { throw "INVALID_STATE_ERR: connection must be opened before send() is called"; } var ssl = false; var url = Url.parse(settings.url); // Determine the server switch (url.protocol) { case 'https:': ssl = true; // SSL & non-SSL both need host, no break here. case 'http:': var host = url.hostname; break; case undefined: case '': var host = "localhost"; break; default: throw "Protocol not supported."; } // Default to port 80. If accessing localhost on another port be sure // to use http://localhost:port/path var port = url.port || (ssl ? 443 : 80); // Add query string if one is used var uri = url.pathname + (url.search ? url.search : ''); // Set the Host header or the server may reject the request this.setRequestHeader("Host", host); // Set content length header if (settings.method == "GET" || settings.method == "HEAD") { data = null; } else if (data) { this.setRequestHeader("Content-Length", Buffer.byteLength(data)); if (!headers["Content-Type"]) { this.setRequestHeader("Content-Type", "text/plain;charset=UTF-8"); } } // Use the proper protocol var doRequest = ssl ? https.request : http.request; var options = { host: host, port: port, path: uri, method: settings.method, headers: headers, agent: false }; var req = doRequest(options, function(res) { response = res; response.setEncoding("utf8"); setState(self.HEADERS_RECEIVED); self.status = response.statusCode; response.on('data', function(chunk) { // Make sure there's some data if (chunk) { self.responseText += chunk; } setState(self.LOADING); }); response.on('end', function() { setState(self.DONE); }); response.on('error', function() { self.handleError(error); }); }).on('error', function(error) { self.handleError(error); }); req.setHeader("Connection", "Close"); // Node 0.4 and later won't accept empty data. Make sure it's needed. if (data) { req.write(data); } req.end(); }; this.handleError = function(error) { this.status = 503; this.statusText = error; this.responseText = error.stack; setState(this.DONE); }; /** * Aborts a request. */ this.abort = function() { headers = defaultHeaders; this.readyState = this.UNSENT; this.responseText = ""; this.responseXML = ""; }; /** * Changes readyState and calls onreadystatechange. * * @param int state New state */ var setState = function(state) { self.readyState = state; self.onreadystatechange(); } }; synapse-0.24.0/contrib/jitsimeetbridge/unjingle/strophe/base64.js000066400000000000000000000050441317335640100250200ustar00rootroot00000000000000// This code was written by Tyler Akins and has been placed in the // public domain. It would be nice if you left this header intact. // Base64 code from Tyler Akins -- http://rumkin.com var Base64 = (function () { var keyStr = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/="; var obj = { /** * Encodes a string in base64 * @param {String} input The string to encode in base64. 
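     * For example, a short ASCII string round-trips through this object:
     *   Base64.encode('hello')     // returns 'aGVsbG8='
     *   Base64.decode('aGVsbG8=')  // returns 'hello'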
*/ encode: function (input) { var output = ""; var chr1, chr2, chr3; var enc1, enc2, enc3, enc4; var i = 0; do { chr1 = input.charCodeAt(i++); chr2 = input.charCodeAt(i++); chr3 = input.charCodeAt(i++); enc1 = chr1 >> 2; enc2 = ((chr1 & 3) << 4) | (chr2 >> 4); enc3 = ((chr2 & 15) << 2) | (chr3 >> 6); enc4 = chr3 & 63; if (isNaN(chr2)) { enc3 = enc4 = 64; } else if (isNaN(chr3)) { enc4 = 64; } output = output + keyStr.charAt(enc1) + keyStr.charAt(enc2) + keyStr.charAt(enc3) + keyStr.charAt(enc4); } while (i < input.length); return output; }, /** * Decodes a base64 string. * @param {String} input The string to decode. */ decode: function (input) { var output = ""; var chr1, chr2, chr3; var enc1, enc2, enc3, enc4; var i = 0; // remove all characters that are not A-Z, a-z, 0-9, +, /, or = input = input.replace(/[^A-Za-z0-9\+\/\=]/g, ''); do { enc1 = keyStr.indexOf(input.charAt(i++)); enc2 = keyStr.indexOf(input.charAt(i++)); enc3 = keyStr.indexOf(input.charAt(i++)); enc4 = keyStr.indexOf(input.charAt(i++)); chr1 = (enc1 << 2) | (enc2 >> 4); chr2 = ((enc2 & 15) << 4) | (enc3 >> 2); chr3 = ((enc3 & 3) << 6) | enc4; output = output + String.fromCharCode(chr1); if (enc3 != 64) { output = output + String.fromCharCode(chr2); } if (enc4 != 64) { output = output + String.fromCharCode(chr3); } } while (i < input.length); return output; } }; return obj; })(); // Nodify exports.Base64 = Base64; synapse-0.24.0/contrib/jitsimeetbridge/unjingle/strophe/md5.js000066400000000000000000000241641317335640100244250ustar00rootroot00000000000000/* * A JavaScript implementation of the RSA Data Security, Inc. MD5 Message * Digest Algorithm, as defined in RFC 1321. * Version 2.1 Copyright (C) Paul Johnston 1999 - 2002. * Other contributors: Greg Holt, Andrew Kepert, Ydnar, Lostinet * Distributed under the BSD License * See http://pajhome.org.uk/crypt/md5 for more info. */ var MD5 = (function () { /* * Configurable variables. You may need to tweak these to be compatible with * the server-side, but the defaults work in most cases. */ var hexcase = 0; /* hex output format. 0 - lowercase; 1 - uppercase */ var b64pad = ""; /* base-64 pad character. "=" for strict RFC compliance */ var chrsz = 8; /* bits per input character. 8 - ASCII; 16 - Unicode */ /* * Add integers, wrapping at 2^32. This uses 16-bit operations internally * to work around bugs in some JS interpreters. */ var safe_add = function (x, y) { var lsw = (x & 0xFFFF) + (y & 0xFFFF); var msw = (x >> 16) + (y >> 16) + (lsw >> 16); return (msw << 16) | (lsw & 0xFFFF); }; /* * Bitwise rotate a 32-bit number to the left. */ var bit_rol = function (num, cnt) { return (num << cnt) | (num >>> (32 - cnt)); }; /* * Convert a string to an array of little-endian words * If chrsz is ASCII, characters >255 have their hi-byte silently ignored. */ var str2binl = function (str) { var bin = []; var mask = (1 << chrsz) - 1; for(var i = 0; i < str.length * chrsz; i += chrsz) { bin[i>>5] |= (str.charCodeAt(i / chrsz) & mask) << (i%32); } return bin; }; /* * Convert an array of little-endian words to a string */ var binl2str = function (bin) { var str = ""; var mask = (1 << chrsz) - 1; for(var i = 0; i < bin.length * 32; i += chrsz) { str += String.fromCharCode((bin[i>>5] >>> (i % 32)) & mask); } return str; }; /* * Convert an array of little-endian words to a hex string. */ var binl2hex = function (binarray) { var hex_tab = hexcase ? 
"0123456789ABCDEF" : "0123456789abcdef"; var str = ""; for(var i = 0; i < binarray.length * 4; i++) { str += hex_tab.charAt((binarray[i>>2] >> ((i%4)*8+4)) & 0xF) + hex_tab.charAt((binarray[i>>2] >> ((i%4)*8 )) & 0xF); } return str; }; /* * Convert an array of little-endian words to a base-64 string */ var binl2b64 = function (binarray) { var tab = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"; var str = ""; var triplet, j; for(var i = 0; i < binarray.length * 4; i += 3) { triplet = (((binarray[i >> 2] >> 8 * ( i %4)) & 0xFF) << 16) | (((binarray[i+1 >> 2] >> 8 * ((i+1)%4)) & 0xFF) << 8 ) | ((binarray[i+2 >> 2] >> 8 * ((i+2)%4)) & 0xFF); for(j = 0; j < 4; j++) { if(i * 8 + j * 6 > binarray.length * 32) { str += b64pad; } else { str += tab.charAt((triplet >> 6*(3-j)) & 0x3F); } } } return str; }; /* * These functions implement the four basic operations the algorithm uses. */ var md5_cmn = function (q, a, b, x, s, t) { return safe_add(bit_rol(safe_add(safe_add(a, q),safe_add(x, t)), s),b); }; var md5_ff = function (a, b, c, d, x, s, t) { return md5_cmn((b & c) | ((~b) & d), a, b, x, s, t); }; var md5_gg = function (a, b, c, d, x, s, t) { return md5_cmn((b & d) | (c & (~d)), a, b, x, s, t); }; var md5_hh = function (a, b, c, d, x, s, t) { return md5_cmn(b ^ c ^ d, a, b, x, s, t); }; var md5_ii = function (a, b, c, d, x, s, t) { return md5_cmn(c ^ (b | (~d)), a, b, x, s, t); }; /* * Calculate the MD5 of an array of little-endian words, and a bit length */ var core_md5 = function (x, len) { /* append padding */ x[len >> 5] |= 0x80 << ((len) % 32); x[(((len + 64) >>> 9) << 4) + 14] = len; var a = 1732584193; var b = -271733879; var c = -1732584194; var d = 271733878; var olda, oldb, oldc, oldd; for (var i = 0; i < x.length; i += 16) { olda = a; oldb = b; oldc = c; oldd = d; a = md5_ff(a, b, c, d, x[i+ 0], 7 , -680876936); d = md5_ff(d, a, b, c, x[i+ 1], 12, -389564586); c = md5_ff(c, d, a, b, x[i+ 2], 17, 606105819); b = md5_ff(b, c, d, a, x[i+ 3], 22, -1044525330); a = md5_ff(a, b, c, d, x[i+ 4], 7 , -176418897); d = md5_ff(d, a, b, c, x[i+ 5], 12, 1200080426); c = md5_ff(c, d, a, b, x[i+ 6], 17, -1473231341); b = md5_ff(b, c, d, a, x[i+ 7], 22, -45705983); a = md5_ff(a, b, c, d, x[i+ 8], 7 , 1770035416); d = md5_ff(d, a, b, c, x[i+ 9], 12, -1958414417); c = md5_ff(c, d, a, b, x[i+10], 17, -42063); b = md5_ff(b, c, d, a, x[i+11], 22, -1990404162); a = md5_ff(a, b, c, d, x[i+12], 7 , 1804603682); d = md5_ff(d, a, b, c, x[i+13], 12, -40341101); c = md5_ff(c, d, a, b, x[i+14], 17, -1502002290); b = md5_ff(b, c, d, a, x[i+15], 22, 1236535329); a = md5_gg(a, b, c, d, x[i+ 1], 5 , -165796510); d = md5_gg(d, a, b, c, x[i+ 6], 9 , -1069501632); c = md5_gg(c, d, a, b, x[i+11], 14, 643717713); b = md5_gg(b, c, d, a, x[i+ 0], 20, -373897302); a = md5_gg(a, b, c, d, x[i+ 5], 5 , -701558691); d = md5_gg(d, a, b, c, x[i+10], 9 , 38016083); c = md5_gg(c, d, a, b, x[i+15], 14, -660478335); b = md5_gg(b, c, d, a, x[i+ 4], 20, -405537848); a = md5_gg(a, b, c, d, x[i+ 9], 5 , 568446438); d = md5_gg(d, a, b, c, x[i+14], 9 , -1019803690); c = md5_gg(c, d, a, b, x[i+ 3], 14, -187363961); b = md5_gg(b, c, d, a, x[i+ 8], 20, 1163531501); a = md5_gg(a, b, c, d, x[i+13], 5 , -1444681467); d = md5_gg(d, a, b, c, x[i+ 2], 9 , -51403784); c = md5_gg(c, d, a, b, x[i+ 7], 14, 1735328473); b = md5_gg(b, c, d, a, x[i+12], 20, -1926607734); a = md5_hh(a, b, c, d, x[i+ 5], 4 , -378558); d = md5_hh(d, a, b, c, x[i+ 8], 11, -2022574463); c = md5_hh(c, d, a, b, x[i+11], 16, 1839030562); b = md5_hh(b, c, d, a, 
x[i+14], 23, -35309556); a = md5_hh(a, b, c, d, x[i+ 1], 4 , -1530992060); d = md5_hh(d, a, b, c, x[i+ 4], 11, 1272893353); c = md5_hh(c, d, a, b, x[i+ 7], 16, -155497632); b = md5_hh(b, c, d, a, x[i+10], 23, -1094730640); a = md5_hh(a, b, c, d, x[i+13], 4 , 681279174); d = md5_hh(d, a, b, c, x[i+ 0], 11, -358537222); c = md5_hh(c, d, a, b, x[i+ 3], 16, -722521979); b = md5_hh(b, c, d, a, x[i+ 6], 23, 76029189); a = md5_hh(a, b, c, d, x[i+ 9], 4 , -640364487); d = md5_hh(d, a, b, c, x[i+12], 11, -421815835); c = md5_hh(c, d, a, b, x[i+15], 16, 530742520); b = md5_hh(b, c, d, a, x[i+ 2], 23, -995338651); a = md5_ii(a, b, c, d, x[i+ 0], 6 , -198630844); d = md5_ii(d, a, b, c, x[i+ 7], 10, 1126891415); c = md5_ii(c, d, a, b, x[i+14], 15, -1416354905); b = md5_ii(b, c, d, a, x[i+ 5], 21, -57434055); a = md5_ii(a, b, c, d, x[i+12], 6 , 1700485571); d = md5_ii(d, a, b, c, x[i+ 3], 10, -1894986606); c = md5_ii(c, d, a, b, x[i+10], 15, -1051523); b = md5_ii(b, c, d, a, x[i+ 1], 21, -2054922799); a = md5_ii(a, b, c, d, x[i+ 8], 6 , 1873313359); d = md5_ii(d, a, b, c, x[i+15], 10, -30611744); c = md5_ii(c, d, a, b, x[i+ 6], 15, -1560198380); b = md5_ii(b, c, d, a, x[i+13], 21, 1309151649); a = md5_ii(a, b, c, d, x[i+ 4], 6 , -145523070); d = md5_ii(d, a, b, c, x[i+11], 10, -1120210379); c = md5_ii(c, d, a, b, x[i+ 2], 15, 718787259); b = md5_ii(b, c, d, a, x[i+ 9], 21, -343485551); a = safe_add(a, olda); b = safe_add(b, oldb); c = safe_add(c, oldc); d = safe_add(d, oldd); } return [a, b, c, d]; }; /* * Calculate the HMAC-MD5, of a key and some data */ var core_hmac_md5 = function (key, data) { var bkey = str2binl(key); if(bkey.length > 16) { bkey = core_md5(bkey, key.length * chrsz); } var ipad = new Array(16), opad = new Array(16); for(var i = 0; i < 16; i++) { ipad[i] = bkey[i] ^ 0x36363636; opad[i] = bkey[i] ^ 0x5C5C5C5C; } var hash = core_md5(ipad.concat(str2binl(data)), 512 + data.length * chrsz); return core_md5(opad.concat(hash), 512 + 128); }; var obj = { /* * These are the functions you'll usually want to call. * They take string arguments and return either hex or base-64 encoded * strings. */ hexdigest: function (s) { return binl2hex(core_md5(str2binl(s), s.length * chrsz)); }, b64digest: function (s) { return binl2b64(core_md5(str2binl(s), s.length * chrsz)); }, hash: function (s) { return binl2str(core_md5(str2binl(s), s.length * chrsz)); }, hmac_hexdigest: function (key, data) { return binl2hex(core_hmac_md5(key, data)); }, hmac_b64digest: function (key, data) { return binl2b64(core_hmac_md5(key, data)); }, hmac_hash: function (key, data) { return binl2str(core_hmac_md5(key, data)); }, /* * Perform a simple self-test to see if the VM is working */ test: function () { return MD5.hexdigest("abc") === "900150983cd24fb0d6963f7d28e17f72"; } }; return obj; })(); // Nodify exports.MD5 = MD5; synapse-0.24.0/contrib/jitsimeetbridge/unjingle/strophe/strophe.js000066400000000000000000003206151317335640100254240ustar00rootroot00000000000000/* This program is distributed under the terms of the MIT license. Please see the LICENSE file for details. Copyright 2006-2008, OGG, LLC */ /* jslint configuration: */ /*global document, window, setTimeout, clearTimeout, console, XMLHttpRequest, ActiveXObject, Base64, MD5, Strophe, $build, $msg, $iq, $pres */ /** File: strophe.js * A JavaScript library for XMPP BOSH. * * This is the JavaScript version of the Strophe library. 
Since JavaScript * has no facilities for persistent TCP connections, this library uses * Bidirectional-streams Over Synchronous HTTP (BOSH) to emulate * a persistent, stateful, two-way connection to an XMPP server. More * information on BOSH can be found in XEP 124. */ /** PrivateFunction: Function.prototype.bind * Bind a function to an instance. * * This Function object extension method creates a bound method similar * to those in Python. This means that the 'this' object will point * to the instance you want. See * MDC's bind() documentation and * Bound Functions and Function Imports in JavaScript * for a complete explanation. * * This extension already exists in some browsers (namely, Firefox 3), but * we provide it to support those that don't. * * Parameters: * (Object) obj - The object that will become 'this' in the bound function. * (Object) argN - An option argument that will be prepended to the * arguments given for the function call * * Returns: * The bound function. */ /* Make it work on node.js: Nodify * * Steps: * 1. Create the global objects: window, document, Base64, MD5 and XMLHttpRequest * 2. Use the node-XMLHttpRequest module. * 3. Use jsdom for the document object - since it supports DOM functions. * 4. Replace all calls to childNodes with _childNodes (since the former doesn't * seem to work on jsdom). * 5. While getting the response from XMLHttpRequest, manually convert the text * data to XML. * 6. All calls to nodeName should replaced by nodeName.toLowerCase() since jsdom * seems to always convert node names to upper case. * */ var XMLHttpRequest = require('./XMLHttpRequest.js').XMLHttpRequest; var Base64 = require('./base64.js').Base64; var MD5 = require('./md5.js').MD5; var jsdom = require("jsdom").jsdom; document = jsdom(""), window = { XMLHttpRequest: XMLHttpRequest, Base64: Base64, MD5: MD5 }; exports.Strophe = window; if (!Function.prototype.bind) { Function.prototype.bind = function (obj /*, arg1, arg2, ... */) { var func = this; var _slice = Array.prototype.slice; var _concat = Array.prototype.concat; var _args = _slice.call(arguments, 1); return function () { return func.apply(obj ? obj : this, _concat.call(_args, _slice.call(arguments, 0))); }; }; } /** PrivateFunction: Array.prototype.indexOf * Return the index of an object in an array. * * This function is not supplied by some JavaScript implementations, so * we provide it if it is missing. This code is from: * http://developer.mozilla.org/En/Core_JavaScript_1.5_Reference:Objects:Array:indexOf * * Parameters: * (Object) elt - The object to look for. * (Integer) from - The index from which to start looking. (optional). * * Returns: * The index of elt in the array or -1 if not found. */ if (!Array.prototype.indexOf) { Array.prototype.indexOf = function(elt /*, from*/) { var len = this.length; var from = Number(arguments[1]) || 0; from = (from < 0) ? Math.ceil(from) : Math.floor(from); if (from < 0) { from += len; } for (; from < len; from++) { if (from in this && this[from] === elt) { return from; } } return -1; }; } /* All of the Strophe globals are defined in this special function below so * that references to the globals become closures. This will ensure that * on page reload, these references will still be available to callbacks * that are still executing. */ (function (callback) { var Strophe; /** Function: $build * Create a Strophe.Builder. * This is an alias for 'new Strophe.Builder(name, attrs)'. * * Parameters: * (String) name - The root element name. 
* (Object) attrs - The attributes for the root element in object notation. * * Returns: * A new Strophe.Builder object. */ function $build(name, attrs) { return new Strophe.Builder(name, attrs); } /** Function: $msg * Create a Strophe.Builder with a element as the root. * * Parmaeters: * (Object) attrs - The element attributes in object notation. * * Returns: * A new Strophe.Builder object. */ function $msg(attrs) { return new Strophe.Builder("message", attrs); } /** Function: $iq * Create a Strophe.Builder with an element as the root. * * Parameters: * (Object) attrs - The element attributes in object notation. * * Returns: * A new Strophe.Builder object. */ function $iq(attrs) { return new Strophe.Builder("iq", attrs); } /** Function: $pres * Create a Strophe.Builder with a element as the root. * * Parameters: * (Object) attrs - The element attributes in object notation. * * Returns: * A new Strophe.Builder object. */ function $pres(attrs) { return new Strophe.Builder("presence", attrs); } /** Class: Strophe * An object container for all Strophe library functions. * * This class is just a container for all the objects and constants * used in the library. It is not meant to be instantiated, but to * provide a namespace for library objects, constants, and functions. */ Strophe = { /** Constant: VERSION * The version of the Strophe library. Unreleased builds will have * a version of head-HASH where HASH is a partial revision. */ VERSION: "@VERSION@", /** Constants: XMPP Namespace Constants * Common namespace constants from the XMPP RFCs and XEPs. * * NS.HTTPBIND - HTTP BIND namespace from XEP 124. * NS.BOSH - BOSH namespace from XEP 206. * NS.CLIENT - Main XMPP client namespace. * NS.AUTH - Legacy authentication namespace. * NS.ROSTER - Roster operations namespace. * NS.PROFILE - Profile namespace. * NS.DISCO_INFO - Service discovery info namespace from XEP 30. * NS.DISCO_ITEMS - Service discovery items namespace from XEP 30. * NS.MUC - Multi-User Chat namespace from XEP 45. * NS.SASL - XMPP SASL namespace from RFC 3920. * NS.STREAM - XMPP Streams namespace from RFC 3920. * NS.BIND - XMPP Binding namespace from RFC 3920. * NS.SESSION - XMPP Session namespace from RFC 3920. */ NS: { HTTPBIND: "http://jabber.org/protocol/httpbind", BOSH: "urn:xmpp:xbosh", CLIENT: "jabber:client", AUTH: "jabber:iq:auth", ROSTER: "jabber:iq:roster", PROFILE: "jabber:iq:profile", DISCO_INFO: "http://jabber.org/protocol/disco#info", DISCO_ITEMS: "http://jabber.org/protocol/disco#items", MUC: "http://jabber.org/protocol/muc", SASL: "urn:ietf:params:xml:ns:xmpp-sasl", STREAM: "http://etherx.jabber.org/streams", BIND: "urn:ietf:params:xml:ns:xmpp-bind", SESSION: "urn:ietf:params:xml:ns:xmpp-session", VERSION: "jabber:iq:version", STANZAS: "urn:ietf:params:xml:ns:xmpp-stanzas" }, /** Function: addNamespace * This function is used to extend the current namespaces in * Strophe.NS. It takes a key and a value with the key being the * name of the new namespace, with its actual value. * For example: * Strophe.addNamespace('PUBSUB', "http://jabber.org/protocol/pubsub"); * * Parameters: * (String) name - The name under which the namespace will be * referenced under Strophe.NS * (String) value - The actual namespace. */ addNamespace: function (name, value) { Strophe.NS[name] = value; }, /** Constants: Connection Status Constants * Connection status constants for use by the connection handler * callback. 
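 *
 * A connect callback might branch on these values, for example (sketch):
 * > function onConnect(status, condition) {
 * >     if (status === Strophe.Status.CONNECTED) {
 * >         (user code here)
 * >     } else if (status === Strophe.Status.CONNFAIL) {
 * >         (user code here)
 * >     }
 * > }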
* * Status.ERROR - An error has occurred * Status.CONNECTING - The connection is currently being made * Status.CONNFAIL - The connection attempt failed * Status.AUTHENTICATING - The connection is authenticating * Status.AUTHFAIL - The authentication attempt failed * Status.CONNECTED - The connection has succeeded * Status.DISCONNECTED - The connection has been terminated * Status.DISCONNECTING - The connection is currently being terminated * Status.ATTACHED - The connection has been attached */ Status: { ERROR: 0, CONNECTING: 1, CONNFAIL: 2, AUTHENTICATING: 3, AUTHFAIL: 4, CONNECTED: 5, DISCONNECTED: 6, DISCONNECTING: 7, ATTACHED: 8 }, /** Constants: Log Level Constants * Logging level indicators. * * LogLevel.DEBUG - Debug output * LogLevel.INFO - Informational output * LogLevel.WARN - Warnings * LogLevel.ERROR - Errors * LogLevel.FATAL - Fatal errors */ LogLevel: { DEBUG: 0, INFO: 1, WARN: 2, ERROR: 3, FATAL: 4 }, /** PrivateConstants: DOM Element Type Constants * DOM element types. * * ElementType.NORMAL - Normal element. * ElementType.TEXT - Text data element. */ ElementType: { NORMAL: 1, TEXT: 3 }, /** PrivateConstants: Timeout Values * Timeout values for error states. These values are in seconds. * These should not be changed unless you know exactly what you are * doing. * * TIMEOUT - Timeout multiplier. A waiting request will be considered * failed after Math.floor(TIMEOUT * wait) seconds have elapsed. * This defaults to 1.1, and with default wait, 66 seconds. * SECONDARY_TIMEOUT - Secondary timeout multiplier. In cases where * Strophe can detect early failure, it will consider the request * failed if it doesn't return after * Math.floor(SECONDARY_TIMEOUT * wait) seconds have elapsed. * This defaults to 0.1, and with default wait, 6 seconds. */ TIMEOUT: 1.1, SECONDARY_TIMEOUT: 0.1, /** Function: forEachChild * Map a function over some or all child elements of a given element. * * This is a small convenience function for mapping a function over * some or all of the children of an element. If elemName is null, all * children will be passed to the function, otherwise only children * whose tag names match elemName will be passed. * * Parameters: * (XMLElement) elem - The element to operate on. * (String) elemName - The child element tag name filter. * (Function) func - The function to apply to each child. This * function should take a single argument, a DOM element. */ forEachChild: function (elem, elemName, func) { var i, childNode; for (i = 0; i < elem._childNodes.length; i++) { childNode = elem._childNodes[i]; if (childNode.nodeType == Strophe.ElementType.NORMAL && (!elemName || this.isTagEqual(childNode, elemName))) { func(childNode); } } }, /** Function: isTagEqual * Compare an element's tag name with a string. * * This function is case insensitive. * * Parameters: * (XMLElement) el - A DOM element. * (String) name - The element name. * * Returns: * true if the element's tag name matches _el_, and false * otherwise. */ isTagEqual: function (el, name) { return el.tagName.toLowerCase() == name.toLowerCase(); }, /** PrivateVariable: _xmlGenerator * _Private_ variable that caches a DOM document to * generate elements. */ _xmlGenerator: null, /** PrivateFunction: _makeGenerator * _Private_ function that creates a dummy XML DOM document to serve as * an element and text node generator. 
*/ _makeGenerator: function () { var doc; if (window.ActiveXObject) { doc = this._getIEXmlDom(); doc.appendChild(doc.createElement('strophe')); } else { doc = document.implementation .createDocument('jabber:client', 'strophe', null); } return doc; }, /** Function: xmlGenerator * Get the DOM document to generate elements. * * Returns: * The currently used DOM document. */ xmlGenerator: function () { if (!Strophe._xmlGenerator) { Strophe._xmlGenerator = Strophe._makeGenerator(); } return Strophe._xmlGenerator; }, /** PrivateFunction: _getIEXmlDom * Gets IE xml doc object * * Returns: * A Microsoft XML DOM Object * See Also: * http://msdn.microsoft.com/en-us/library/ms757837%28VS.85%29.aspx */ _getIEXmlDom : function() { var doc = null; var docStrings = [ "Msxml2.DOMDocument.6.0", "Msxml2.DOMDocument.5.0", "Msxml2.DOMDocument.4.0", "MSXML2.DOMDocument.3.0", "MSXML2.DOMDocument", "MSXML.DOMDocument", "Microsoft.XMLDOM" ]; for (var d = 0; d < docStrings.length; d++) { if (doc === null) { try { doc = new ActiveXObject(docStrings[d]); } catch (e) { doc = null; } } else { break; } } return doc; }, /** Function: xmlElement * Create an XML DOM element. * * This function creates an XML DOM element correctly across all * implementations. Note that these are not HTML DOM elements, which * aren't appropriate for XMPP stanzas. * * Parameters: * (String) name - The name for the element. * (Array|Object) attrs - An optional array or object containing * key/value pairs to use as element attributes. The object should * be in the format {'key': 'value'} or {key: 'value'}. The array * should have the format [['key1', 'value1'], ['key2', 'value2']]. * (String) text - The text child data for the element. * * Returns: * A new XML DOM element. */ xmlElement: function (name) { if (!name) { return null; } var node = Strophe.xmlGenerator().createElement(name); // FIXME: this should throw errors if args are the wrong type or // there are more than two optional args var a, i, k; for (a = 1; a < arguments.length; a++) { if (!arguments[a]) { continue; } if (typeof(arguments[a]) == "string" || typeof(arguments[a]) == "number") { node.appendChild(Strophe.xmlTextNode(arguments[a])); } else if (typeof(arguments[a]) == "object" && typeof(arguments[a].sort) == "function") { for (i = 0; i < arguments[a].length; i++) { if (typeof(arguments[a][i]) == "object" && typeof(arguments[a][i].sort) == "function") { node.setAttribute(arguments[a][i][0], arguments[a][i][1]); } } } else if (typeof(arguments[a]) == "object") { for (k in arguments[a]) { if (arguments[a].hasOwnProperty(k)) { node.setAttribute(k, arguments[a][k]); } } } } return node; }, /* Function: xmlescape * Excapes invalid xml characters. * * Parameters: * (String) text - text to escape. * * Returns: * Escaped text. */ xmlescape: function(text) { text = text.replace(/\&/g, "&"); text = text.replace(//g, ">"); return text; }, /** Function: xmlTextNode * Creates an XML DOM text node. * * Provides a cross implementation version of document.createTextNode. * * Parameters: * (String) text - The content of the text node. * * Returns: * A new XML DOM text node. */ xmlTextNode: function (text) { //ensure text is escaped text = Strophe.xmlescape(text); return Strophe.xmlGenerator().createTextNode(text); }, /** Function: getText * Get the concatenation of all text children of an element. * * Parameters: * (XMLElement) elem - A DOM element. * * Returns: * A String with the concatenated text of all text element children. 
*/ getText: function (elem) { if (!elem) { return null; } var str = ""; if (elem._childNodes.length === 0 && elem.nodeType == Strophe.ElementType.TEXT) { str += elem.nodeValue; } for (var i = 0; i < elem._childNodes.length; i++) { if (elem._childNodes[i].nodeType == Strophe.ElementType.TEXT) { str += elem._childNodes[i].nodeValue; } } return str; }, /** Function: copyElement * Copy an XML DOM element. * * This function copies a DOM element and all its descendants and returns * the new copy. * * Parameters: * (XMLElement) elem - A DOM element. * * Returns: * A new, copied DOM element tree. */ copyElement: function (elem) { var i, el; if (elem.nodeType == Strophe.ElementType.NORMAL) { el = Strophe.xmlElement(elem.tagName); for (i = 0; i < elem.attributes.length; i++) { el.setAttribute(elem.attributes[i].nodeName.toLowerCase(), elem.attributes[i].value); } for (i = 0; i < elem._childNodes.length; i++) { el.appendChild(Strophe.copyElement(elem._childNodes[i])); } } else if (elem.nodeType == Strophe.ElementType.TEXT) { el = Strophe.xmlTextNode(elem.nodeValue); } return el; }, /** Function: escapeNode * Escape the node part (also called local part) of a JID. * * Parameters: * (String) node - A node (or local part). * * Returns: * An escaped node (or local part). */ escapeNode: function (node) { return node.replace(/^\s+|\s+$/g, '') .replace(/\\/g, "\\5c") .replace(/ /g, "\\20") .replace(/\"/g, "\\22") .replace(/\&/g, "\\26") .replace(/\'/g, "\\27") .replace(/\//g, "\\2f") .replace(/:/g, "\\3a") .replace(//g, "\\3e") .replace(/@/g, "\\40"); }, /** Function: unescapeNode * Unescape a node part (also called local part) of a JID. * * Parameters: * (String) node - A node (or local part). * * Returns: * An unescaped node (or local part). */ unescapeNode: function (node) { return node.replace(/\\20/g, " ") .replace(/\\22/g, '"') .replace(/\\26/g, "&") .replace(/\\27/g, "'") .replace(/\\2f/g, "/") .replace(/\\3a/g, ":") .replace(/\\3c/g, "<") .replace(/\\3e/g, ">") .replace(/\\40/g, "@") .replace(/\\5c/g, "\\"); }, /** Function: getNodeFromJid * Get the node portion of a JID String. * * Parameters: * (String) jid - A JID. * * Returns: * A String containing the node. */ getNodeFromJid: function (jid) { if (jid.indexOf("@") < 0) { return null; } return jid.split("@")[0]; }, /** Function: getDomainFromJid * Get the domain portion of a JID String. * * Parameters: * (String) jid - A JID. * * Returns: * A String containing the domain. */ getDomainFromJid: function (jid) { var bare = Strophe.getBareJidFromJid(jid); if (bare.indexOf("@") < 0) { return bare; } else { var parts = bare.split("@"); parts.splice(0, 1); return parts.join('@'); } }, /** Function: getResourceFromJid * Get the resource portion of a JID String. * * Parameters: * (String) jid - A JID. * * Returns: * A String containing the resource. */ getResourceFromJid: function (jid) { var s = jid.split("/"); if (s.length < 2) { return null; } s.splice(0, 1); return s.join('/'); }, /** Function: getBareJidFromJid * Get the bare JID from a JID String. * * Parameters: * (String) jid - A JID. * * Returns: * A String containing the bare JID. */ getBareJidFromJid: function (jid) { return jid ? jid.split("/")[0] : null; }, /** Function: log * User overrideable logging function. * * This function is called whenever the Strophe library calls any * of the logging functions. The default implementation of this * function does nothing. 
If client code wishes to handle the logging * messages, it should override this with * > Strophe.log = function (level, msg) { * > (user code here) * > }; * * Please note that data sent and received over the wire is logged * via Strophe.Connection.rawInput() and Strophe.Connection.rawOutput(). * * The different levels and their meanings are * * DEBUG - Messages useful for debugging purposes. * INFO - Informational messages. This is mostly information like * 'disconnect was called' or 'SASL auth succeeded'. * WARN - Warnings about potential problems. This is mostly used * to report transient connection errors like request timeouts. * ERROR - Some error occurred. * FATAL - A non-recoverable fatal error occurred. * * Parameters: * (Integer) level - The log level of the log message. This will * be one of the values in Strophe.LogLevel. * (String) msg - The log message. */ log: function (level, msg) { return; }, /** Function: debug * Log a message at the Strophe.LogLevel.DEBUG level. * * Parameters: * (String) msg - The log message. */ debug: function(msg) { this.log(this.LogLevel.DEBUG, msg); }, /** Function: info * Log a message at the Strophe.LogLevel.INFO level. * * Parameters: * (String) msg - The log message. */ info: function (msg) { this.log(this.LogLevel.INFO, msg); }, /** Function: warn * Log a message at the Strophe.LogLevel.WARN level. * * Parameters: * (String) msg - The log message. */ warn: function (msg) { this.log(this.LogLevel.WARN, msg); }, /** Function: error * Log a message at the Strophe.LogLevel.ERROR level. * * Parameters: * (String) msg - The log message. */ error: function (msg) { this.log(this.LogLevel.ERROR, msg); }, /** Function: fatal * Log a message at the Strophe.LogLevel.FATAL level. * * Parameters: * (String) msg - The log message. */ fatal: function (msg) { this.log(this.LogLevel.FATAL, msg); }, /** Function: serialize * Render a DOM element and all descendants to a String. * * Parameters: * (XMLElement) elem - A DOM element. * * Returns: * The serialized element tree as a String. */ serialize: function (elem) { var result; if (!elem) { return null; } if (typeof(elem.tree) === "function") { elem = elem.tree(); } var nodeName = elem.nodeName.toLowerCase(); var i, child; if (elem.getAttribute("_realname")) { nodeName = elem.getAttribute("_realname").toLowerCase(); } result = "<" + nodeName.toLowerCase(); for (i = 0; i < elem.attributes.length; i++) { if(elem.attributes[i].nodeName.toLowerCase() != "_realname") { result += " " + elem.attributes[i].nodeName.toLowerCase() + "='" + elem.attributes[i].value .replace(/&/g, "&") .replace(/\'/g, "'") .replace(/ 0) { result += ">"; for (i = 0; i < elem._childNodes.length; i++) { child = elem._childNodes[i]; if (child.nodeType == Strophe.ElementType.NORMAL) { // normal element, so recurse result += Strophe.serialize(child); } else if (child.nodeType == Strophe.ElementType.TEXT) { // text element result += child.nodeValue; } } result += ""; } else { result += "/>"; } return result; }, /** PrivateVariable: _requestId * _Private_ variable that keeps track of the request ids for * connections. */ _requestId: 0, /** PrivateVariable: Strophe.connectionPlugins * _Private_ variable Used to store plugin names that need * initialization on Strophe.Connection construction. */ _connectionPlugins: {}, /** Function: addConnectionPlugin * Extends the Strophe.Connection object with the given plugin. * * Paramaters: * (String) name - The name of the extension. * (Object) ptype - The plugin's prototype. 
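 *
 * For example (sketch; 'myplugin' is just a placeholder name), a plugin
 * object exposing an init() method can be registered before any
 * Strophe.Connection is constructed; init() is then called with each
 * new connection:
 * > Strophe.addConnectionPlugin('myplugin', {
 * >     init: function (conn) { (user code here) }
 * > });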
*/ addConnectionPlugin: function (name, ptype) { Strophe._connectionPlugins[name] = ptype; } }; /** Class: Strophe.Builder * XML DOM builder. * * This object provides an interface similar to JQuery but for building * DOM element easily and rapidly. All the functions except for toString() * and tree() return the object, so calls can be chained. Here's an * example using the $iq() builder helper. * > $iq({to: 'you', from: 'me', type: 'get', id: '1'}) * > .c('query', {xmlns: 'strophe:example'}) * > .c('example') * > .toString() * The above generates this XML fragment * > * > * > * > * > * The corresponding DOM manipulations to get a similar fragment would be * a lot more tedious and probably involve several helper variables. * * Since adding children makes new operations operate on the child, up() * is provided to traverse up the tree. To add two children, do * > builder.c('child1', ...).up().c('child2', ...) * The next operation on the Builder will be relative to the second child. */ /** Constructor: Strophe.Builder * Create a Strophe.Builder object. * * The attributes should be passed in object notation. For example * > var b = new Builder('message', {to: 'you', from: 'me'}); * or * > var b = new Builder('messsage', {'xml:lang': 'en'}); * * Parameters: * (String) name - The name of the root element. * (Object) attrs - The attributes for the root element in object notation. * * Returns: * A new Strophe.Builder. */ Strophe.Builder = function (name, attrs) { // Set correct namespace for jabber:client elements if (name == "presence" || name == "message" || name == "iq") { if (attrs && !attrs.xmlns) { attrs.xmlns = Strophe.NS.CLIENT; } else if (!attrs) { attrs = {xmlns: Strophe.NS.CLIENT}; } } // Holds the tree being built. this.nodeTree = Strophe.xmlElement(name, attrs); // Points to the current operation node. this.node = this.nodeTree; }; Strophe.Builder.prototype = { /** Function: tree * Return the DOM tree. * * This function returns the current DOM tree as an element object. This * is suitable for passing to functions like Strophe.Connection.send(). * * Returns: * The DOM tree as a element object. */ tree: function () { return this.nodeTree; }, /** Function: toString * Serialize the DOM tree to a String. * * This function returns a string serialization of the current DOM * tree. It is often used internally to pass data to a * Strophe.Request object. * * Returns: * The serialized DOM tree in a String. */ toString: function () { return Strophe.serialize(this.nodeTree); }, /** Function: up * Make the current parent element the new current element. * * This function is often used after c() to traverse back up the tree. * For example, to add two children to the same element * > builder.c('child1', {}).up().c('child2', {}); * * Returns: * The Stophe.Builder object. */ up: function () { this.node = this.node.parentNode; return this; }, /** Function: attrs * Add or modify attributes of the current element. * * The attributes should be passed in object notation. This function * does not move the current element pointer. * * Parameters: * (Object) moreattrs - The attributes to add/modify in object notation. * * Returns: * The Strophe.Builder object. */ attrs: function (moreattrs) { for (var k in moreattrs) { if (moreattrs.hasOwnProperty(k)) { this.node.setAttribute(k, moreattrs[k]); } } return this; }, /** Function: c * Add a child to the current element and make it the new current * element. * * This function moves the current element pointer to the child. 
If you * need to add another child, it is necessary to use up() to go back * to the parent in the tree. * * Parameters: * (String) name - The name of the child. * (Object) attrs - The attributes of the child in object notation. * * Returns: * The Strophe.Builder object. */ c: function (name, attrs) { var child = Strophe.xmlElement(name, attrs); this.node.appendChild(child); this.node = child; return this; }, /** Function: cnode * Add a child to the current element and make it the new current * element. * * This function is the same as c() except that instead of using a * name and an attributes object to create the child it uses an * existing DOM element object. * * Parameters: * (XMLElement) elem - A DOM element. * * Returns: * The Strophe.Builder object. */ cnode: function (elem) { var xmlGen = Strophe.xmlGenerator(); var newElem = xmlGen.importNode ? xmlGen.importNode(elem, true) : Strophe.copyElement(elem); this.node.appendChild(newElem); this.node = newElem; return this; }, /** Function: t * Add a child text element. * * This *does not* make the child the new current element since there * are no children of text elements. * * Parameters: * (String) text - The text data to append to the current element. * * Returns: * The Strophe.Builder object. */ t: function (text) { var child = Strophe.xmlTextNode(text); this.node.appendChild(child); return this; } }; /** PrivateClass: Strophe.Handler * _Private_ helper class for managing stanza handlers. * * A Strophe.Handler encapsulates a user provided callback function to be * executed when matching stanzas are received by the connection. * Handlers can be either one-off or persistant depending on their * return value. Returning true will cause a Handler to remain active, and * returning false will remove the Handler. * * Users will not use Strophe.Handler objects directly, but instead they * will use Strophe.Connection.addHandler() and * Strophe.Connection.deleteHandler(). */ /** PrivateConstructor: Strophe.Handler * Create and initialize a new Strophe.Handler. * * Parameters: * (Function) handler - A function to be executed when the handler is run. * (String) ns - The namespace to match. * (String) name - The element name to match. * (String) type - The element type to match. * (String) id - The element id attribute to match. * (String) from - The element from attribute to match. * (Object) options - Handler options * * Returns: * A new Strophe.Handler object. */ Strophe.Handler = function (handler, ns, name, type, id, from, options) { this.handler = handler; this.ns = ns; this.name = name; this.type = type; this.id = id; this.options = options || {matchbare: false}; // default matchBare to false if undefined if (!this.options.matchBare) { this.options.matchBare = false; } if (this.options.matchBare) { this.from = from ? Strophe.getBareJidFromJid(from) : null; } else { this.from = from; } // whether the handler is a user handler or a system handler this.user = true; }; Strophe.Handler.prototype = { /** PrivateFunction: isMatch * Tests if a stanza matches the Strophe.Handler. * * Parameters: * (XMLElement) elem - The XML element to test. * * Returns: * true if the stanza matches and false otherwise. 
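 *
 * For example (sketch, via the public addHandler() wrapper mentioned
 * above), a handler registered with name 'message' and type 'chat'
 * only matches incoming <message type='chat'> stanzas:
 * > conn.addHandler(onMessage, null, 'message', 'chat');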
*/ isMatch: function (elem) { var nsMatch; var from = null; if (this.options.matchBare) { from = Strophe.getBareJidFromJid(elem.getAttribute('from')); } else { from = elem.getAttribute('from'); } nsMatch = false; if (!this.ns) { nsMatch = true; } else { var that = this; Strophe.forEachChild(elem, null, function (elem) { if (elem.getAttribute("xmlns") == that.ns) { nsMatch = true; } }); nsMatch = nsMatch || elem.getAttribute("xmlns") == this.ns; } if (nsMatch && (!this.name || Strophe.isTagEqual(elem, this.name)) && (!this.type || elem.getAttribute("type") == this.type) && (!this.id || elem.getAttribute("id") == this.id) && (!this.from || from == this.from)) { return true; } return false; }, /** PrivateFunction: run * Run the callback on a matching stanza. * * Parameters: * (XMLElement) elem - The DOM element that triggered the * Strophe.Handler. * * Returns: * A boolean indicating if the handler should remain active. */ run: function (elem) { var result = null; try { result = this.handler(elem); } catch (e) { if (e.sourceURL) { Strophe.fatal("error: " + this.handler + " " + e.sourceURL + ":" + e.line + " - " + e.name + ": " + e.message); } else if (e.fileName) { if (typeof(console) != "undefined") { console.trace(); console.error(this.handler, " - error - ", e, e.message); } Strophe.fatal("error: " + this.handler + " " + e.fileName + ":" + e.lineNumber + " - " + e.name + ": " + e.message); } else { Strophe.fatal("error: " + this.handler); } throw e; } return result; }, /** PrivateFunction: toString * Get a String representation of the Strophe.Handler object. * * Returns: * A String. */ toString: function () { return "{Handler: " + this.handler + "(" + this.name + "," + this.id + "," + this.ns + ")}"; } }; /** PrivateClass: Strophe.TimedHandler * _Private_ helper class for managing timed handlers. * * A Strophe.TimedHandler encapsulates a user provided callback that * should be called after a certain period of time or at regular * intervals. The return value of the callback determines whether the * Strophe.TimedHandler will continue to fire. * * Users will not use Strophe.TimedHandler objects directly, but instead * they will use Strophe.Connection.addTimedHandler() and * Strophe.Connection.deleteTimedHandler(). */ /** PrivateConstructor: Strophe.TimedHandler * Create and initialize a new Strophe.TimedHandler object. * * Parameters: * (Integer) period - The number of milliseconds to wait before the * handler is called. * (Function) handler - The callback to run when the handler fires. This * function should take no arguments. * * Returns: * A new Strophe.TimedHandler object. */ Strophe.TimedHandler = function (period, handler) { this.period = period; this.handler = handler; this.lastCalled = new Date().getTime(); this.user = true; }; Strophe.TimedHandler.prototype = { /** PrivateFunction: run * Run the callback for the Strophe.TimedHandler. * * Returns: * true if the Strophe.TimedHandler should be called again, and false * otherwise. */ run: function () { this.lastCalled = new Date().getTime(); return this.handler(); }, /** PrivateFunction: reset * Reset the last called time for the Strophe.TimedHandler. */ reset: function () { this.lastCalled = new Date().getTime(); }, /** PrivateFunction: toString * Get a string representation of the Strophe.TimedHandler object. * * Returns: * The string representation. 
*/ toString: function () { return "{TimedHandler: " + this.handler + "(" + this.period +")}"; } }; /** PrivateClass: Strophe.Request * _Private_ helper class that provides a cross implementation abstraction * for a BOSH related XMLHttpRequest. * * The Strophe.Request class is used internally to encapsulate BOSH request * information. It is not meant to be used from user's code. */ /** PrivateConstructor: Strophe.Request * Create and initialize a new Strophe.Request object. * * Parameters: * (XMLElement) elem - The XML data to be sent in the request. * (Function) func - The function that will be called when the * XMLHttpRequest readyState changes. * (Integer) rid - The BOSH rid attribute associated with this request. * (Integer) sends - The number of times this same request has been * sent. */ Strophe.Request = function (elem, func, rid, sends) { this.id = ++Strophe._requestId; this.xmlData = elem; this.data = Strophe.serialize(elem); // save original function in case we need to make a new request // from this one. this.origFunc = func; this.func = func; this.rid = rid; this.date = NaN; this.sends = sends || 0; this.abort = false; this.dead = null; this.age = function () { if (!this.date) { return 0; } var now = new Date(); return (now - this.date) / 1000; }; this.timeDead = function () { if (!this.dead) { return 0; } var now = new Date(); return (now - this.dead) / 1000; }; this.xhr = this._newXHR(); }; Strophe.Request.prototype = { /** PrivateFunction: getResponse * Get a response from the underlying XMLHttpRequest. * * This function attempts to get a response from the request and checks * for errors. * * Throws: * "parsererror" - A parser error occured. * * Returns: * The DOM element tree of the response. */ getResponse: function () { // console.log("getResponse:", this.xhr.responseXML, ":", this.xhr.responseText); var node = null; if (this.xhr.responseXML && this.xhr.responseXML.documentElement) { node = this.xhr.responseXML.documentElement; if (node.tagName == "parsererror") { Strophe.error("invalid response received"); Strophe.error("responseText: " + this.xhr.responseText); Strophe.error("responseXML: " + Strophe.serialize(this.xhr.responseXML)); throw "parsererror"; } } else if (this.xhr.responseText) { // Hack for node. var _div = document.createElement("div"); _div.innerHTML = this.xhr.responseText; node = _div._childNodes[0]; Strophe.error("invalid response received"); Strophe.error("responseText: " + this.xhr.responseText); Strophe.error("responseXML: " + Strophe.serialize(this.xhr.responseXML)); } return node; }, /** PrivateFunction: _newXHR * _Private_ helper function to create XMLHttpRequests. * * This function creates XMLHttpRequests across all implementations. * * Returns: * A new XMLHttpRequest. */ _newXHR: function () { var xhr = null; if (window.XMLHttpRequest) { xhr = new XMLHttpRequest(); if (xhr.overrideMimeType) { xhr.overrideMimeType("text/xml"); } } else if (window.ActiveXObject) { xhr = new ActiveXObject("Microsoft.XMLHTTP"); } // use Function.bind() to prepend ourselves as an argument xhr.onreadystatechange = this.func.bind(null, this); return xhr; } }; /** Class: Strophe.Connection * XMPP Connection manager. * * Thie class is the main part of Strophe. It manages a BOSH connection * to an XMPP server and dispatches events to the user callbacks as * data arrives. It supports SASL PLAIN, SASL DIGEST-MD5, and legacy * authentication. 
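 *
 * A minimal end-to-end sketch (the BOSH URL, JID and password below are
 * placeholders):
 * > var conn = new Strophe.Connection('http://localhost:5280/http-bind');
 * > conn.connect('user@example.com', 'secret', function (status) {
 * >     if (status === Strophe.Status.CONNECTED) {
 * >         conn.send($pres());
 * >     }
 * > });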
* * After creating a Strophe.Connection object, the user will typically * call connect() with a user supplied callback to handle connection level * events like authentication failure, disconnection, or connection * complete. * * The user will also have several event handlers defined by using * addHandler() and addTimedHandler(). These will allow the user code to * respond to interesting stanzas or do something periodically with the * connection. These handlers will be active once authentication is * finished. * * To send data to the connection, use send(). */ /** Constructor: Strophe.Connection * Create and initialize a Strophe.Connection object. * * Parameters: * (String) service - The BOSH service URL. * * Returns: * A new Strophe.Connection object. */ Strophe.Connection = function (service) { /* The path to the httpbind service. */ this.service = service; /* The connected JID. */ this.jid = ""; /* request id for body tags */ this.rid = Math.floor(Math.random() * 4294967295); /* The current session ID. */ this.sid = null; this.streamId = null; /* stream:features */ this.features = null; // SASL this.do_session = false; this.do_bind = false; // handler lists this.timedHandlers = []; this.handlers = []; this.removeTimeds = []; this.removeHandlers = []; this.addTimeds = []; this.addHandlers = []; this._idleTimeout = null; this._disconnectTimeout = null; this.authenticated = false; this.disconnecting = false; this.connected = false; this.errors = 0; this.paused = false; // default BOSH values this.hold = 1; this.wait = 60; this.window = 5; this._data = []; this._requests = []; this._uniqueId = Math.round(Math.random() * 10000); this._sasl_success_handler = null; this._sasl_failure_handler = null; this._sasl_challenge_handler = null; // setup onIdle callback every 1/10th of a second this._idleTimeout = setTimeout(this._onIdle.bind(this), 100); // initialize plugins for (var k in Strophe._connectionPlugins) { if (Strophe._connectionPlugins.hasOwnProperty(k)) { var ptype = Strophe._connectionPlugins[k]; // jslint complaints about the below line, but this is fine var F = function () {}; F.prototype = ptype; this[k] = new F(); this[k].init(this); } } }; Strophe.Connection.prototype = { /** Function: reset * Reset the connection. * * This function should be called after a connection is disconnected * before that connection is reused. */ reset: function () { this.rid = Math.floor(Math.random() * 4294967295); this.sid = null; this.streamId = null; // SASL this.do_session = false; this.do_bind = false; // handler lists this.timedHandlers = []; this.handlers = []; this.removeTimeds = []; this.removeHandlers = []; this.addTimeds = []; this.addHandlers = []; this.authenticated = false; this.disconnecting = false; this.connected = false; this.errors = 0; this._requests = []; this._uniqueId = Math.round(Math.random()*10000); }, /** Function: pause * Pause the request manager. * * This will prevent Strophe from sending any more requests to the * server. This is very useful for temporarily pausing while a lot * of send() calls are happening quickly. This causes Strophe to * send the data in a single request, saving many request trips. */ pause: function () { this.paused = true; }, /** Function: resume * Resume the request manager. * * This resumes after pause() has been called. */ resume: function () { this.paused = false; }, /** Function: getUniqueId * Generate a unique ID for use in elements. * * All stanzas are required to have unique id attributes. This * function makes creating these easy. 
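 * For example (sketch):
 * > conn.getUniqueId('sendIQ');   // e.g. '1234:sendIQ', where the number
 * >                               // is the connection's internal counter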
Each connection instance has * a counter which starts from zero, and the value of this counter * plus a colon followed by the suffix becomes the unique id. If no * suffix is supplied, the counter is used as the unique id. * * Suffixes are used to make debugging easier when reading the stream * data, and their use is recommended. The counter resets to 0 for * every new connection for the same reason. For connections to the * same server that authenticate the same way, all the ids should be * the same, which makes it easy to see changes. This is useful for * automated testing as well. * * Parameters: * (String) suffix - A optional suffix to append to the id. * * Returns: * A unique string to be used for the id attribute. */ getUniqueId: function (suffix) { if (typeof(suffix) == "string" || typeof(suffix) == "number") { return ++this._uniqueId + ":" + suffix; } else { return ++this._uniqueId + ""; } }, /** Function: connect * Starts the connection process. * * As the connection process proceeds, the user supplied callback will * be triggered multiple times with status updates. The callback * should take two arguments - the status code and the error condition. * * The status code will be one of the values in the Strophe.Status * constants. The error condition will be one of the conditions * defined in RFC 3920 or the condition 'strophe-parsererror'. * * Please see XEP 124 for a more detailed explanation of the optional * parameters below. * * Parameters: * (String) jid - The user's JID. This may be a bare JID, * or a full JID. If a node is not supplied, SASL ANONYMOUS * authentication will be attempted. * (String) pass - The user's password. * (Function) callback The connect callback function. * (Integer) wait - The optional HTTPBIND wait value. This is the * time the server will wait before returning an empty result for * a request. The default setting of 60 seconds is recommended. * Other settings will require tweaks to the Strophe.TIMEOUT value. * (Integer) hold - The optional HTTPBIND hold value. This is the * number of connections the server will hold at one time. This * should almost always be set to 1 (the default). */ connect: function (jid, pass, callback, wait, hold, route) { this.jid = jid; this.pass = pass; this.connect_callback = callback; this.disconnecting = false; this.connected = false; this.authenticated = false; this.errors = 0; this.wait = wait || this.wait; this.hold = hold || this.hold; // parse jid for domain and resource this.domain = Strophe.getDomainFromJid(this.jid); // build the body tag var body_attrs = { to: this.domain, "xml:lang": "en", wait: this.wait, hold: this.hold, content: "text/xml; charset=utf-8", ver: "1.6", "xmpp:version": "1.0", "xmlns:xmpp": Strophe.NS.BOSH }; if (route) { body_attrs.route = route; } var body = this._buildBody().attrs(body_attrs); this._changeConnectStatus(Strophe.Status.CONNECTING, null); this._requests.push( new Strophe.Request(body.tree(), this._onRequestStateChange.bind( this, this._connect_cb.bind(this)), body.tree().getAttribute("rid"))); this._throttledRequestHandler(); }, /** Function: attach * Attach to an already created and authenticated BOSH session. * * This function is provided to allow Strophe to attach to BOSH * sessions which have been created externally, perhaps by a Web * application. This is often used to support auto-login type features * without putting user credentials into the page. * * Parameters: * (String) jid - The full JID that is bound by the session. * (String) sid - The SID of the BOSH session. 
* (String) rid - The current RID of the BOSH session. This RID * will be used by the next request. * (Function) callback The connect callback function. * (Integer) wait - The optional HTTPBIND wait value. This is the * time the server will wait before returning an empty result for * a request. The default setting of 60 seconds is recommended. * Other settings will require tweaks to the Strophe.TIMEOUT value. * (Integer) hold - The optional HTTPBIND hold value. This is the * number of connections the server will hold at one time. This * should almost always be set to 1 (the default). * (Integer) wind - The optional HTTBIND window value. This is the * allowed range of request ids that are valid. The default is 5. */ attach: function (jid, sid, rid, callback, wait, hold, wind) { this.jid = jid; this.sid = sid; this.rid = rid; this.connect_callback = callback; this.domain = Strophe.getDomainFromJid(this.jid); this.authenticated = true; this.connected = true; this.wait = wait || this.wait; this.hold = hold || this.hold; this.window = wind || this.window; this._changeConnectStatus(Strophe.Status.ATTACHED, null); }, /** Function: xmlInput * User overrideable function that receives XML data coming into the * connection. * * The default function does nothing. User code can override this with * > Strophe.Connection.xmlInput = function (elem) { * > (user code) * > }; * * Parameters: * (XMLElement) elem - The XML data received by the connection. */ xmlInput: function (elem) { return; }, /** Function: xmlOutput * User overrideable function that receives XML data sent to the * connection. * * The default function does nothing. User code can override this with * > Strophe.Connection.xmlOutput = function (elem) { * > (user code) * > }; * * Parameters: * (XMLElement) elem - The XMLdata sent by the connection. */ xmlOutput: function (elem) { return; }, /** Function: rawInput * User overrideable function that receives raw data coming into the * connection. * * The default function does nothing. User code can override this with * > Strophe.Connection.rawInput = function (data) { * > (user code) * > }; * * Parameters: * (String) data - The data received by the connection. */ rawInput: function (data) { return; }, /** Function: rawOutput * User overrideable function that receives raw data sent to the * connection. * * The default function does nothing. User code can override this with * > Strophe.Connection.rawOutput = function (data) { * > (user code) * > }; * * Parameters: * (String) data - The data sent by the connection. */ rawOutput: function (data) { return; }, /** Function: send * Send a stanza. * * This function is called to push data onto the send queue to * go out over the wire. Whenever a request is sent to the BOSH * server, all pending data is sent and the queue is flushed. * * Parameters: * (XMLElement | * [XMLElement] | * Strophe.Builder) elem - The stanza to send. */ send: function (elem) { if (elem === null) { return ; } if (typeof(elem.sort) === "function") { for (var i = 0; i < elem.length; i++) { this._queueData(elem[i]); } } else if (typeof(elem.tree) === "function") { this._queueData(elem.tree()); } else { this._queueData(elem); } this._throttledRequestHandler(); clearTimeout(this._idleTimeout); this._idleTimeout = setTimeout(this._onIdle.bind(this), 100); }, /** Function: flush * Immediately send any pending outgoing data. 
* * Normally send() queues outgoing data until the next idle period * (100ms), which optimizes network use in the common cases when * several send()s are called in succession. flush() can be used to * immediately send all pending data. */ flush: function () { // cancel the pending idle period and run the idle function // immediately clearTimeout(this._idleTimeout); this._onIdle(); }, /** Function: sendIQ * Helper function to send IQ stanzas. * * Parameters: * (XMLElement) elem - The stanza to send. * (Function) callback - The callback function for a successful request. * (Function) errback - The callback function for a failed or timed * out request. On timeout, the stanza will be null. * (Integer) timeout - The time specified in milliseconds for a * timeout to occur. * * Returns: * The id used to send the IQ. */ sendIQ: function(elem, callback, errback, timeout) { var timeoutHandler = null; var that = this; if (typeof(elem.tree) === "function") { elem = elem.tree(); } var id = elem.getAttribute('id'); // inject id if not found if (!id) { id = this.getUniqueId("sendIQ"); elem.setAttribute("id", id); } var handler = this.addHandler(function (stanza) { // remove timeout handler if there is one if (timeoutHandler) { that.deleteTimedHandler(timeoutHandler); } var iqtype = stanza.getAttribute('type'); if (iqtype == 'result') { if (callback) { callback(stanza); } } else if (iqtype == 'error') { if (errback) { errback(stanza); } } else { throw { name: "StropheError", message: "Got bad IQ type of " + iqtype }; } }, null, 'iq', null, id); // if timeout specified, setup timeout handler. if (timeout) { timeoutHandler = this.addTimedHandler(timeout, function () { // get rid of normal handler that.deleteHandler(handler); // call errback on timeout with null stanza if (errback) { errback(null); } return false; }); } this.send(elem); return id; }, /** PrivateFunction: _queueData * Queue outgoing data for later sending. Also ensures that the data * is a DOMElement. */ _queueData: function (element) { if (element === null || !element.tagName || !element._childNodes) { throw { name: "StropheError", message: "Cannot queue non-DOMElement." }; } this._data.push(element); }, /** PrivateFunction: _sendRestart * Send an xmpp:restart stanza. */ _sendRestart: function () { this._data.push("restart"); this._throttledRequestHandler(); clearTimeout(this._idleTimeout); this._idleTimeout = setTimeout(this._onIdle.bind(this), 100); }, /** Function: addTimedHandler * Add a timed handler to the connection. * * This function adds a timed handler. The provided handler will * be called every period milliseconds until it returns false, * the connection is terminated, or the handler is removed. Handlers * that wish to continue being invoked should return true. * * Because of method binding it is necessary to save the result of * this function if you wish to remove a handler with * deleteTimedHandler(). * * Note that user handlers are not active until authentication is * successful. * * Parameters: * (Integer) period - The period of the handler. * (Function) handler - The callback function. * * Returns: * A reference to the handler that can be used to remove it. */ addTimedHandler: function (period, handler) { var thand = new Strophe.TimedHandler(period, handler); this.addTimeds.push(thand); return thand; }, /** Function: deleteTimedHandler * Delete a timed handler for a connection. * * This function removes a timed handler from the connection. 
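 * (Illustrative examples of the sendIQ() and addTimedHandler() /
 * deleteTimedHandler() helpers documented above; the callbacks, the roster
 * query and the intervals are assumptions, not from the original docs.)
 *
 * > conn.sendIQ(
 * >     $iq({type: "get"}).c("query", {xmlns: "jabber:iq:roster"}),
 * >     function (stanza) { onRoster(stanza); },   // type='result'
 * >     function (stanza) { onError(stanza); },    // error, or null on timeout
 * >     5000);
 * >
 * > var ref = conn.addTimedHandler(30000, function () {
 * >     conn.send($pres());   // periodic keep-alive
 * >     return true;          // keep the handler active
 * > });
 * > conn.deleteTimedHandler(ref);   // remove it when no longer needed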
The * handRef parameter is *not* the function passed to addTimedHandler(), * but is the reference returned from addTimedHandler(). * * Parameters: * (Strophe.TimedHandler) handRef - The handler reference. */ deleteTimedHandler: function (handRef) { // this must be done in the Idle loop so that we don't change // the handlers during iteration this.removeTimeds.push(handRef); }, /** Function: addHandler * Add a stanza handler for the connection. * * This function adds a stanza handler to the connection. The * handler callback will be called for any stanza that matches * the parameters. Note that if multiple parameters are supplied, * they must all match for the handler to be invoked. * * The handler will receive the stanza that triggered it as its argument. * The handler should return true if it is to be invoked again; * returning false will remove the handler after it returns. * * As a convenience, the ns parameters applies to the top level element * and also any of its immediate children. This is primarily to make * matching /iq/query elements easy. * * The options argument contains handler matching flags that affect how * matches are determined. Currently the only flag is matchBare (a * boolean). When matchBare is true, the from parameter and the from * attribute on the stanza will be matched as bare JIDs instead of * full JIDs. To use this, pass {matchBare: true} as the value of * options. The default value for matchBare is false. * * The return value should be saved if you wish to remove the handler * with deleteHandler(). * * Parameters: * (Function) handler - The user callback. * (String) ns - The namespace to match. * (String) name - The stanza name to match. * (String) type - The stanza type attribute to match. * (String) id - The stanza id attribute to match. * (String) from - The stanza from attribute to match. * (String) options - The handler options * * Returns: * A reference to the handler that can be used to remove it. */ addHandler: function (handler, ns, name, type, id, from, options) { var hand = new Strophe.Handler(handler, ns, name, type, id, from, options); this.addHandlers.push(hand); return hand; }, /** Function: deleteHandler * Delete a stanza handler for a connection. * * This function removes a stanza handler from the connection. The * handRef parameter is *not* the function passed to addHandler(), * but is the reference returned from addHandler(). * * Parameters: * (Strophe.Handler) handRef - The handler reference. */ deleteHandler: function (handRef) { // this must be done in the Idle loop so that we don't change // the handlers during iteration this.removeHandlers.push(handRef); }, /** Function: disconnect * Start the graceful disconnection process. * * This function starts the disconnection process. This process starts * by sending unavailable presence and sending BOSH body of type * terminate. A timeout handler makes sure that disconnection happens * even if the BOSH server does not respond. * * The user supplied connection callback will be notified of the * progress as this process happens. * * Parameters: * (String) reason - The reason the disconnect is occuring. 
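 * (Illustrative example of the addHandler() matching options described
 * above; the JID and the callback name are assumptions.)
 *
 * > var ref = conn.addHandler(onAliceMessage, null, "message", null, null,
 * >                           "alice@example.com", {matchBare: true});
 * > // matches stanzas from any of alice@example.com's resources;
 * > // remove it later with:
 * > conn.deleteHandler(ref);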
*/ disconnect: function (reason) { this._changeConnectStatus(Strophe.Status.DISCONNECTING, reason); Strophe.info("Disconnect was called because: " + reason); if (this.connected) { // setup timeout handler this._disconnectTimeout = this._addSysTimedHandler( 3000, this._onDisconnectTimeout.bind(this)); this._sendTerminate(); } }, /** PrivateFunction: _changeConnectStatus * _Private_ helper function that makes sure plugins and the user's * callback are notified of connection status changes. * * Parameters: * (Integer) status - the new connection status, one of the values * in Strophe.Status * (String) condition - the error condition or null */ _changeConnectStatus: function (status, condition) { // notify all plugins listening for status changes for (var k in Strophe._connectionPlugins) { if (Strophe._connectionPlugins.hasOwnProperty(k)) { var plugin = this[k]; if (plugin.statusChanged) { try { plugin.statusChanged(status, condition); } catch (err) { Strophe.error("" + k + " plugin caused an exception " + "changing status: " + err); } } } } // notify the user's callback if (this.connect_callback) { try { this.connect_callback(status, condition); } catch (e) { Strophe.error("User connection callback caused an " + "exception: " + e); } } }, /** PrivateFunction: _buildBody * _Private_ helper function to generate the wrapper for BOSH. * * Returns: * A Strophe.Builder with a element. */ _buildBody: function () { var bodyWrap = $build('body', { rid: this.rid++, xmlns: Strophe.NS.HTTPBIND }); if (this.sid !== null) { bodyWrap.attrs({sid: this.sid}); } return bodyWrap; }, /** PrivateFunction: _removeRequest * _Private_ function to remove a request from the queue. * * Parameters: * (Strophe.Request) req - The request to remove. */ _removeRequest: function (req) { Strophe.debug("removing request"); var i; for (i = this._requests.length - 1; i >= 0; i--) { if (req == this._requests[i]) { this._requests.splice(i, 1); } } // IE6 fails on setting to null, so set to empty function req.xhr.onreadystatechange = function () {}; this._throttledRequestHandler(); }, /** PrivateFunction: _restartRequest * _Private_ function to restart a request that is presumed dead. * * Parameters: * (Integer) i - The index of the request in the queue. */ _restartRequest: function (i) { var req = this._requests[i]; if (req.dead === null) { req.dead = new Date(); } this._processRequest(i); }, /** PrivateFunction: _processRequest * _Private_ function to process a request in the queue. * * This function takes requests off the queue and sends them and * restarts dead requests. * * Parameters: * (Integer) i - The index of the request in the queue. 
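 * (Note added for clarity: the wrapper generated by _buildBody() above is a
 * BOSH body element, roughly
 * >   <body rid='1573741820' xmlns='http://jabber.org/protocol/httpbind'/>
 * plus a sid attribute once a session ID has been negotiated; the rid shown
 * is only an example value.)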
*/ _processRequest: function (i) { var req = this._requests[i]; var reqStatus = -1; try { if (req.xhr.readyState == 4) { reqStatus = req.xhr.status; } } catch (e) { Strophe.error("caught an error in _requests[" + i + "], reqStatus: " + reqStatus); } if (typeof(reqStatus) == "undefined") { reqStatus = -1; } // make sure we limit the number of retries if (req.sends > 5) { this._onDisconnectTimeout(); return; } var time_elapsed = req.age(); var primaryTimeout = (!isNaN(time_elapsed) && time_elapsed > Math.floor(Strophe.TIMEOUT * this.wait)); var secondaryTimeout = (req.dead !== null && req.timeDead() > Math.floor(Strophe.SECONDARY_TIMEOUT * this.wait)); var requestCompletedWithServerError = (req.xhr.readyState == 4 && (reqStatus < 1 || reqStatus >= 500)); if (primaryTimeout || secondaryTimeout || requestCompletedWithServerError) { if (secondaryTimeout) { Strophe.error("Request " + this._requests[i].id + " timed out (secondary), restarting"); } req.abort = true; req.xhr.abort(); // setting to null fails on IE6, so set to empty function req.xhr.onreadystatechange = function () {}; this._requests[i] = new Strophe.Request(req.xmlData, req.origFunc, req.rid, req.sends); req = this._requests[i]; } if (req.xhr.readyState === 0) { Strophe.debug("request id " + req.id + "." + req.sends + " posting"); req.date = new Date(); try { req.xhr.open("POST", this.service, true); } catch (e2) { Strophe.error("XHR open failed."); if (!this.connected) { this._changeConnectStatus(Strophe.Status.CONNFAIL, "bad-service"); } this.disconnect(); return; } // Fires the XHR request -- may be invoked immediately // or on a gradually expanding retry window for reconnects var sendFunc = function () { req.xhr.send(req.data); }; // Implement progressive backoff for reconnects -- // First retry (send == 1) should also be instantaneous if (req.sends > 1) { // Using a cube of the retry number creats a nicely // expanding retry window var backoff = Math.pow(req.sends, 3) * 1000; setTimeout(sendFunc, backoff); } else { sendFunc(); } req.sends++; this.xmlOutput(req.xmlData); this.rawOutput(req.data); } else { Strophe.debug("_processRequest: " + (i === 0 ? "first" : "second") + " request has readyState of " + req.xhr.readyState); } }, /** PrivateFunction: _throttledRequestHandler * _Private_ function to throttle requests to the connection window. * * This function makes sure we don't send requests so fast that the * request ids overflow the connection window in the case that one * request died. */ _throttledRequestHandler: function () { if (!this._requests) { Strophe.debug("_throttledRequestHandler called with " + "undefined requests"); } else { Strophe.debug("_throttledRequestHandler called with " + this._requests.length + " requests"); } if (!this._requests || this._requests.length === 0) { return; } if (this._requests.length > 0) { this._processRequest(0); } if (this._requests.length > 1 && Math.abs(this._requests[0].rid - this._requests[1].rid) < this.window) { this._processRequest(1); } }, /** PrivateFunction: _onRequestStateChange * _Private_ handler for Strophe.Request state changes. * * This function is called when the XMLHttpRequest readyState changes. * It contains a lot of error handling logic for the many ways that * requests can fail, and calls the request callback when requests * succeed. * * Parameters: * (Function) func - The handler for the request. * (Strophe.Request) req - The request that is changing readyState. */ _onRequestStateChange: function (func, req) { Strophe.debug("request id " + req.id + "." 
+ req.sends + " state changed to " + req.xhr.readyState); if (req.abort) { req.abort = false; return; } // request complete var reqStatus; if (req.xhr.readyState == 4) { reqStatus = 0; try { reqStatus = req.xhr.status; } catch (e) { // ignore errors from undefined status attribute. works // around a browser bug } if (typeof(reqStatus) == "undefined") { reqStatus = 0; } if (this.disconnecting) { if (reqStatus >= 400) { this._hitError(reqStatus); return; } } var reqIs0 = (this._requests[0] == req); var reqIs1 = (this._requests[1] == req); if ((reqStatus > 0 && reqStatus < 500) || req.sends > 5) { // remove from internal queue this._removeRequest(req); Strophe.debug("request id " + req.id + " should now be removed"); } // request succeeded if (reqStatus == 200) { // if request 1 finished, or request 0 finished and request // 1 is over Strophe.SECONDARY_TIMEOUT seconds old, we need to // restart the other - both will be in the first spot, as the // completed request has been removed from the queue already if (reqIs1 || (reqIs0 && this._requests.length > 0 && this._requests[0].age() > Math.floor(Strophe.SECONDARY_TIMEOUT * this.wait))) { this._restartRequest(0); } // call handler Strophe.debug("request id " + req.id + "." + req.sends + " got 200"); func(req); this.errors = 0; } else { Strophe.error("request id " + req.id + "." + req.sends + " error " + reqStatus + " happened"); if (reqStatus === 0 || (reqStatus >= 400 && reqStatus < 600) || reqStatus >= 12000) { this._hitError(reqStatus); if (reqStatus >= 400 && reqStatus < 500) { this._changeConnectStatus(Strophe.Status.DISCONNECTING, null); this._doDisconnect(); } } } if (!((reqStatus > 0 && reqStatus < 500) || req.sends > 5)) { this._throttledRequestHandler(); } } }, /** PrivateFunction: _hitError * _Private_ function to handle the error count. * * Requests are resent automatically until their error count reaches * 5. Each time an error is encountered, this function is called to * increment the count and disconnect if the count is too high. * * Parameters: * (Integer) reqStatus - The request status. */ _hitError: function (reqStatus) { this.errors++; Strophe.warn("request errored, status: " + reqStatus + ", number of errors: " + this.errors); if (this.errors > 4) { this._onDisconnectTimeout(); } }, /** PrivateFunction: _doDisconnect * _Private_ function to disconnect. * * This is the last piece of the disconnection logic. This resets the * connection and alerts the user's connection callback. */ _doDisconnect: function () { Strophe.info("_doDisconnect was called"); this.authenticated = false; this.disconnecting = false; this.sid = null; this.streamId = null; this.rid = Math.floor(Math.random() * 4294967295); // tell the parent we disconnected if (this.connected) { this._changeConnectStatus(Strophe.Status.DISCONNECTED, null); this.connected = false; } // delete handlers this.handlers = []; this.timedHandlers = []; this.removeTimeds = []; this.removeHandlers = []; this.addTimeds = []; this.addHandlers = []; }, /** PrivateFunction: _dataRecv * _Private_ handler to processes incoming data from the the connection. * * Except for _connect_cb handling the initial connection request, * this function handles the incoming data for all requests. This * function also fires stanza handlers that match each incoming * stanza. * * Parameters: * (Strophe.Request) req - The request that has data ready. 
*/ _dataRecv: function (req) { try { var elem = req.getResponse(); } catch (e) { if (e != "parsererror") { throw e; } this.disconnect("strophe-parsererror"); } if (elem === null) { return; } this.xmlInput(elem); this.rawInput(Strophe.serialize(elem)); // remove handlers scheduled for deletion var i, hand; while (this.removeHandlers.length > 0) { hand = this.removeHandlers.pop(); i = this.handlers.indexOf(hand); if (i >= 0) { this.handlers.splice(i, 1); } } // add handlers scheduled for addition while (this.addHandlers.length > 0) { this.handlers.push(this.addHandlers.pop()); } // handle graceful disconnect if (this.disconnecting && this._requests.length === 0) { this.deleteTimedHandler(this._disconnectTimeout); this._disconnectTimeout = null; this._doDisconnect(); return; } var typ = elem.getAttribute("type"); var cond, conflict; if (typ !== null && typ == "terminate") { // Don't process stanzas that come in after disconnect if (this.disconnecting) { return; } // an error occurred cond = elem.getAttribute("condition"); conflict = elem.getElementsByTagName("conflict"); if (cond !== null) { if (cond == "remote-stream-error" && conflict.length > 0) { cond = "conflict"; } this._changeConnectStatus(Strophe.Status.CONNFAIL, cond); } else { this._changeConnectStatus(Strophe.Status.CONNFAIL, "unknown"); } this.disconnect(); return; } // send each incoming stanza through the handler chain var that = this; Strophe.forEachChild(elem, null, function (child) { var i, newList; // process handlers newList = that.handlers; that.handlers = []; for (i = 0; i < newList.length; i++) { var hand = newList[i]; if (hand.isMatch(child) && (that.authenticated || !hand.user)) { if (hand.run(child)) { that.handlers.push(hand); } } else { that.handlers.push(hand); } } }); }, /** PrivateFunction: _sendTerminate * _Private_ function to send initial disconnect sequence. * * This is the first step in a graceful disconnect. It sends * the BOSH server a terminate body and includes an unavailable * presence if authentication has completed. */ _sendTerminate: function () { Strophe.info("_sendTerminate was called"); var body = this._buildBody().attrs({type: "terminate"}); if (this.authenticated) { body.c('presence', { xmlns: Strophe.NS.CLIENT, type: 'unavailable' }); } this.disconnecting = true; var req = new Strophe.Request(body.tree(), this._onRequestStateChange.bind( this, this._dataRecv.bind(this)), body.tree().getAttribute("rid")); this._requests.push(req); this._throttledRequestHandler(); }, /** PrivateFunction: _connect_cb * _Private_ handler for initial connection request. * * This handler is used to process the initial connection request * response from the BOSH server. It is used to set up authentication * handlers and start the authentication process. * * SASL authentication will be attempted if available, otherwise * the code will fall back to legacy authentication. * * Parameters: * (Strophe.Request) req - The current request. 
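 * (Summary added for clarity, based on the code below: the no-node checks
 * run first - SASL ANONYMOUS is attempted when the JID has no node part and
 * the server offers it, and the connection fails with
 * 'x-strophe-bad-non-anon-jid' when the JID has no node part and ANONYMOUS
 * is not offered. Otherwise DIGEST-MD5 is preferred, then PLAIN; if
 * mechanisms were advertised but none of these is among them, legacy
 * jabber:iq:auth is used, and if no mechanisms have been seen yet the
 * client polls again for stream:features.)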
*/ _connect_cb: function (req) { Strophe.info("_connect_cb was called"); this.connected = true; var bodyWrap = req.getResponse(); if (!bodyWrap) { return; } this.xmlInput(bodyWrap); this.rawInput(Strophe.serialize(bodyWrap)); var typ = bodyWrap.getAttribute("type"); var cond, conflict; if (typ !== null && typ == "terminate") { // an error occurred cond = bodyWrap.getAttribute("condition"); conflict = bodyWrap.getElementsByTagName("conflict"); if (cond !== null) { if (cond == "remote-stream-error" && conflict.length > 0) { cond = "conflict"; } this._changeConnectStatus(Strophe.Status.CONNFAIL, cond); } else { this._changeConnectStatus(Strophe.Status.CONNFAIL, "unknown"); } return; } // check to make sure we don't overwrite these if _connect_cb is // called multiple times in the case of missing stream:features if (!this.sid) { this.sid = bodyWrap.getAttribute("sid"); } if (!this.stream_id) { this.stream_id = bodyWrap.getAttribute("authid"); } var wind = bodyWrap.getAttribute('requests'); if (wind) { this.window = parseInt(wind, 10); } var hold = bodyWrap.getAttribute('hold'); if (hold) { this.hold = parseInt(hold, 10); } var wait = bodyWrap.getAttribute('wait'); if (wait) { this.wait = parseInt(wait, 10); } var do_sasl_plain = false; var do_sasl_digest_md5 = false; var do_sasl_anonymous = false; var mechanisms = bodyWrap.getElementsByTagName("mechanism"); var i, mech, auth_str, hashed_auth_str; if (mechanisms.length > 0) { for (i = 0; i < mechanisms.length; i++) { mech = Strophe.getText(mechanisms[i]); if (mech == 'DIGEST-MD5') { do_sasl_digest_md5 = true; } else if (mech == 'PLAIN') { do_sasl_plain = true; } else if (mech == 'ANONYMOUS') { do_sasl_anonymous = true; } } } else { // we didn't get stream:features yet, so we need wait for it // by sending a blank poll request var body = this._buildBody(); this._requests.push( new Strophe.Request(body.tree(), this._onRequestStateChange.bind( this, this._connect_cb.bind(this)), body.tree().getAttribute("rid"))); this._throttledRequestHandler(); return; } if (Strophe.getNodeFromJid(this.jid) === null && do_sasl_anonymous) { this._changeConnectStatus(Strophe.Status.AUTHENTICATING, null); this._sasl_success_handler = this._addSysHandler( this._sasl_success_cb.bind(this), null, "success", null, null); this._sasl_failure_handler = this._addSysHandler( this._sasl_failure_cb.bind(this), null, "failure", null, null); this.send($build("auth", { xmlns: Strophe.NS.SASL, mechanism: "ANONYMOUS" }).tree()); } else if (Strophe.getNodeFromJid(this.jid) === null) { // we don't have a node, which is required for non-anonymous // client connections this._changeConnectStatus(Strophe.Status.CONNFAIL, 'x-strophe-bad-non-anon-jid'); this.disconnect(); } else if (do_sasl_digest_md5) { this._changeConnectStatus(Strophe.Status.AUTHENTICATING, null); this._sasl_challenge_handler = this._addSysHandler( this._sasl_challenge1_cb.bind(this), null, "challenge", null, null); this._sasl_failure_handler = this._addSysHandler( this._sasl_failure_cb.bind(this), null, "failure", null, null); this.send($build("auth", { xmlns: Strophe.NS.SASL, mechanism: "DIGEST-MD5" }).tree()); } else if (do_sasl_plain) { // Build the plain auth string (barejid null // username null password) and base 64 encoded. 
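            // For example (illustrative credentials only): for the JID
            // "alice@example.com" with password "wonderland" the string is
            //     alice@example.com\u0000alice\u0000wonderland
            // and it is Base64-encoded below before being sent in the
            // <auth mechanism="PLAIN"/> element.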
auth_str = Strophe.getBareJidFromJid(this.jid); auth_str = auth_str + "\u0000"; auth_str = auth_str + Strophe.getNodeFromJid(this.jid); auth_str = auth_str + "\u0000"; auth_str = auth_str + this.pass; this._changeConnectStatus(Strophe.Status.AUTHENTICATING, null); this._sasl_success_handler = this._addSysHandler( this._sasl_success_cb.bind(this), null, "success", null, null); this._sasl_failure_handler = this._addSysHandler( this._sasl_failure_cb.bind(this), null, "failure", null, null); hashed_auth_str = Base64.encode(auth_str); this.send($build("auth", { xmlns: Strophe.NS.SASL, mechanism: "PLAIN" }).t(hashed_auth_str).tree()); } else { this._changeConnectStatus(Strophe.Status.AUTHENTICATING, null); this._addSysHandler(this._auth1_cb.bind(this), null, null, null, "_auth_1"); this.send($iq({ type: "get", to: this.domain, id: "_auth_1" }).c("query", { xmlns: Strophe.NS.AUTH }).c("username", {}).t(Strophe.getNodeFromJid(this.jid)).tree()); } }, /** PrivateFunction: _sasl_challenge1_cb * _Private_ handler for DIGEST-MD5 SASL authentication. * * Parameters: * (XMLElement) elem - The challenge stanza. * * Returns: * false to remove the handler. */ _sasl_challenge1_cb: function (elem) { var attribMatch = /([a-z]+)=("[^"]+"|[^,"]+)(?:,|$)/; var challenge = Base64.decode(Strophe.getText(elem)); var cnonce = MD5.hexdigest(Math.random() * 1234567890); var realm = ""; var host = null; var nonce = ""; var qop = ""; var matches; // remove unneeded handlers this.deleteHandler(this._sasl_failure_handler); while (challenge.match(attribMatch)) { matches = challenge.match(attribMatch); challenge = challenge.replace(matches[0], ""); matches[2] = matches[2].replace(/^"(.+)"$/, "$1"); switch (matches[1]) { case "realm": realm = matches[2]; break; case "nonce": nonce = matches[2]; break; case "qop": qop = matches[2]; break; case "host": host = matches[2]; break; } } var digest_uri = "xmpp/" + this.domain; if (host !== null) { digest_uri = digest_uri + "/" + host; } var A1 = MD5.hash(Strophe.getNodeFromJid(this.jid) + ":" + realm + ":" + this.pass) + ":" + nonce + ":" + cnonce; var A2 = 'AUTHENTICATE:' + digest_uri; var responseText = ""; responseText += 'username=' + this._quote(Strophe.getNodeFromJid(this.jid)) + ','; responseText += 'realm=' + this._quote(realm) + ','; responseText += 'nonce=' + this._quote(nonce) + ','; responseText += 'cnonce=' + this._quote(cnonce) + ','; responseText += 'nc="00000001",'; responseText += 'qop="auth",'; responseText += 'digest-uri=' + this._quote(digest_uri) + ','; responseText += 'response=' + this._quote( MD5.hexdigest(MD5.hexdigest(A1) + ":" + nonce + ":00000001:" + cnonce + ":auth:" + MD5.hexdigest(A2))) + ','; responseText += 'charset="utf-8"'; this._sasl_challenge_handler = this._addSysHandler( this._sasl_challenge2_cb.bind(this), null, "challenge", null, null); this._sasl_success_handler = this._addSysHandler( this._sasl_success_cb.bind(this), null, "success", null, null); this._sasl_failure_handler = this._addSysHandler( this._sasl_failure_cb.bind(this), null, "failure", null, null); this.send($build('response', { xmlns: Strophe.NS.SASL }).t(Base64.encode(responseText)).tree()); return false; }, /** PrivateFunction: _quote * _Private_ utility function to backslash escape and quote strings. * * Parameters: * (String) str - The string to be quoted. 
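 * (Illustrative example: this._quote('he said "hi"') returns the string
 * "he said \"hi\"" including the surrounding double quotes.)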
* * Returns: * quoted string */ _quote: function (str) { return '"' + str.replace(/\\/g, "\\\\").replace(/"/g, '\\"') + '"'; //" end string workaround for emacs }, /** PrivateFunction: _sasl_challenge2_cb * _Private_ handler for second step of DIGEST-MD5 SASL authentication. * * Parameters: * (XMLElement) elem - The challenge stanza. * * Returns: * false to remove the handler. */ _sasl_challenge2_cb: function (elem) { // remove unneeded handlers this.deleteHandler(this._sasl_success_handler); this.deleteHandler(this._sasl_failure_handler); this._sasl_success_handler = this._addSysHandler( this._sasl_success_cb.bind(this), null, "success", null, null); this._sasl_failure_handler = this._addSysHandler( this._sasl_failure_cb.bind(this), null, "failure", null, null); this.send($build('response', {xmlns: Strophe.NS.SASL}).tree()); return false; }, /** PrivateFunction: _auth1_cb * _Private_ handler for legacy authentication. * * This handler is called in response to the initial * for legacy authentication. It builds an authentication and * sends it, creating a handler (calling back to _auth2_cb()) to * handle the result * * Parameters: * (XMLElement) elem - The stanza that triggered the callback. * * Returns: * false to remove the handler. */ _auth1_cb: function (elem) { // build plaintext auth iq var iq = $iq({type: "set", id: "_auth_2"}) .c('query', {xmlns: Strophe.NS.AUTH}) .c('username', {}).t(Strophe.getNodeFromJid(this.jid)) .up() .c('password').t(this.pass); if (!Strophe.getResourceFromJid(this.jid)) { // since the user has not supplied a resource, we pick // a default one here. unlike other auth methods, the server // cannot do this for us. this.jid = Strophe.getBareJidFromJid(this.jid) + '/strophe'; } iq.up().c('resource', {}).t(Strophe.getResourceFromJid(this.jid)); this._addSysHandler(this._auth2_cb.bind(this), null, null, null, "_auth_2"); this.send(iq.tree()); return false; }, /** PrivateFunction: _sasl_success_cb * _Private_ handler for succesful SASL authentication. * * Parameters: * (XMLElement) elem - The matching stanza. * * Returns: * false to remove the handler. */ _sasl_success_cb: function (elem) { Strophe.info("SASL authentication succeeded."); // remove old handlers this.deleteHandler(this._sasl_failure_handler); this._sasl_failure_handler = null; if (this._sasl_challenge_handler) { this.deleteHandler(this._sasl_challenge_handler); this._sasl_challenge_handler = null; } this._addSysHandler(this._sasl_auth1_cb.bind(this), null, "stream:features", null, null); // we must send an xmpp:restart now this._sendRestart(); return false; }, /** PrivateFunction: _sasl_auth1_cb * _Private_ handler to start stream binding. * * Parameters: * (XMLElement) elem - The matching stanza. * * Returns: * false to remove the handler. 
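 * (Summary added for clarity, based on the surrounding code: after SASL
 * succeeds the stream is restarted, this handler inspects the new
 * stream:features, resource binding is performed - its absence is treated
 * as an authentication failure - and, when the server advertises session
 * support, a session is established before the CONNECTED status is
 * reported.)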
*/ _sasl_auth1_cb: function (elem) { // save stream:features for future usage this.features = elem; var i, child; for (i = 0; i < elem._childNodes.length; i++) { child = elem._childNodes[i]; if (child.nodeName.toLowerCase() == 'bind') { this.do_bind = true; } if (child.nodeName.toLowerCase() == 'session') { this.do_session = true; } } if (!this.do_bind) { this._changeConnectStatus(Strophe.Status.AUTHFAIL, null); return false; } else { this._addSysHandler(this._sasl_bind_cb.bind(this), null, null, null, "_bind_auth_2"); var resource = Strophe.getResourceFromJid(this.jid); if (resource) { this.send($iq({type: "set", id: "_bind_auth_2"}) .c('bind', {xmlns: Strophe.NS.BIND}) .c('resource', {}).t(resource).tree()); } else { this.send($iq({type: "set", id: "_bind_auth_2"}) .c('bind', {xmlns: Strophe.NS.BIND}) .tree()); } } return false; }, /** PrivateFunction: _sasl_bind_cb * _Private_ handler for binding result and session start. * * Parameters: * (XMLElement) elem - The matching stanza. * * Returns: * false to remove the handler. */ _sasl_bind_cb: function (elem) { if (elem.getAttribute("type") == "error") { Strophe.info("SASL binding failed."); this._changeConnectStatus(Strophe.Status.AUTHFAIL, null); return false; } // TODO - need to grab errors var bind = elem.getElementsByTagName("bind"); var jidNode; if (bind.length > 0) { // Grab jid jidNode = bind[0].getElementsByTagName("jid"); if (jidNode.length > 0) { this.jid = Strophe.getText(jidNode[0]); if (this.do_session) { this._addSysHandler(this._sasl_session_cb.bind(this), null, null, null, "_session_auth_2"); this.send($iq({type: "set", id: "_session_auth_2"}) .c('session', {xmlns: Strophe.NS.SESSION}) .tree()); } else { this.authenticated = true; this._changeConnectStatus(Strophe.Status.CONNECTED, null); } } } else { Strophe.info("SASL binding failed."); this._changeConnectStatus(Strophe.Status.AUTHFAIL, null); return false; } }, /** PrivateFunction: _sasl_session_cb * _Private_ handler to finish successful SASL connection. * * This sets Connection.authenticated to true on success, which * starts the processing of user handlers. * * Parameters: * (XMLElement) elem - The matching stanza. * * Returns: * false to remove the handler. */ _sasl_session_cb: function (elem) { if (elem.getAttribute("type") == "result") { this.authenticated = true; this._changeConnectStatus(Strophe.Status.CONNECTED, null); } else if (elem.getAttribute("type") == "error") { Strophe.info("Session creation failed."); this._changeConnectStatus(Strophe.Status.AUTHFAIL, null); return false; } return false; }, /** PrivateFunction: _sasl_failure_cb * _Private_ handler for SASL authentication failure. * * Parameters: * (XMLElement) elem - The matching stanza. * * Returns: * false to remove the handler. */ _sasl_failure_cb: function (elem) { // delete unneeded handlers if (this._sasl_success_handler) { this.deleteHandler(this._sasl_success_handler); this._sasl_success_handler = null; } if (this._sasl_challenge_handler) { this.deleteHandler(this._sasl_challenge_handler); this._sasl_challenge_handler = null; } this._changeConnectStatus(Strophe.Status.AUTHFAIL, null); return false; }, /** PrivateFunction: _auth2_cb * _Private_ handler to finish legacy authentication. * * This handler is called when the result from the jabber:iq:auth * stanza is returned. * * Parameters: * (XMLElement) elem - The stanza that triggered the callback. * * Returns: * false to remove the handler. 
*/ _auth2_cb: function (elem) { if (elem.getAttribute("type") == "result") { this.authenticated = true; this._changeConnectStatus(Strophe.Status.CONNECTED, null); } else if (elem.getAttribute("type") == "error") { this._changeConnectStatus(Strophe.Status.AUTHFAIL, null); this.disconnect(); } return false; }, /** PrivateFunction: _addSysTimedHandler * _Private_ function to add a system level timed handler. * * This function is used to add a Strophe.TimedHandler for the * library code. System timed handlers are allowed to run before * authentication is complete. * * Parameters: * (Integer) period - The period of the handler. * (Function) handler - The callback function. */ _addSysTimedHandler: function (period, handler) { var thand = new Strophe.TimedHandler(period, handler); thand.user = false; this.addTimeds.push(thand); return thand; }, /** PrivateFunction: _addSysHandler * _Private_ function to add a system level stanza handler. * * This function is used to add a Strophe.Handler for the * library code. System stanza handlers are allowed to run before * authentication is complete. * * Parameters: * (Function) handler - The callback function. * (String) ns - The namespace to match. * (String) name - The stanza name to match. * (String) type - The stanza type attribute to match. * (String) id - The stanza id attribute to match. */ _addSysHandler: function (handler, ns, name, type, id) { var hand = new Strophe.Handler(handler, ns, name, type, id); hand.user = false; this.addHandlers.push(hand); return hand; }, /** PrivateFunction: _onDisconnectTimeout * _Private_ timeout handler for handling non-graceful disconnection. * * If the graceful disconnect process does not complete within the * time allotted, this handler finishes the disconnect anyway. * * Returns: * false to remove the handler. */ _onDisconnectTimeout: function () { Strophe.info("_onDisconnectTimeout was called"); // cancel all remaining requests and clear the queue var req; while (this._requests.length > 0) { req = this._requests.pop(); req.abort = true; req.xhr.abort(); // jslint complains, but this is fine. setting to empty func // is necessary for IE6 req.xhr.onreadystatechange = function () {}; } // actually disconnect this._doDisconnect(); return false; }, /** PrivateFunction: _onIdle * _Private_ handler to process events during idle cycle. * * This handler is called every 100ms to fire timed handlers that * are ready and keep poll requests going. */ _onIdle: function () { var i, thand, since, newList; // add timed handlers scheduled for addition // NOTE: we add before remove in the case a timed handler is // added and then deleted before the next _onIdle() call. 
while (this.addTimeds.length > 0) { this.timedHandlers.push(this.addTimeds.pop()); } // remove timed handlers that have been scheduled for deletion while (this.removeTimeds.length > 0) { thand = this.removeTimeds.pop(); i = this.timedHandlers.indexOf(thand); if (i >= 0) { this.timedHandlers.splice(i, 1); } } // call ready timed handlers var now = new Date().getTime(); newList = []; for (i = 0; i < this.timedHandlers.length; i++) { thand = this.timedHandlers[i]; if (this.authenticated || !thand.user) { since = thand.lastCalled + thand.period; if (since - now <= 0) { if (thand.run()) { newList.push(thand); } } else { newList.push(thand); } } } this.timedHandlers = newList; var body, time_elapsed; // if no requests are in progress, poll if (this.authenticated && this._requests.length === 0 && this._data.length === 0 && !this.disconnecting) { Strophe.info("no requests during idle cycle, sending " + "blank request"); this._data.push(null); } if (this._requests.length < 2 && this._data.length > 0 && !this.paused) { body = this._buildBody(); for (i = 0; i < this._data.length; i++) { if (this._data[i] !== null) { if (this._data[i] === "restart") { body.attrs({ to: this.domain, "xml:lang": "en", "xmpp:restart": "true", "xmlns:xmpp": Strophe.NS.BOSH }); } else { body.cnode(this._data[i]).up(); } } } delete this._data; this._data = []; this._requests.push( new Strophe.Request(body.tree(), this._onRequestStateChange.bind( this, this._dataRecv.bind(this)), body.tree().getAttribute("rid"))); this._processRequest(this._requests.length - 1); } if (this._requests.length > 0) { time_elapsed = this._requests[0].age(); if (this._requests[0].dead !== null) { if (this._requests[0].timeDead() > Math.floor(Strophe.SECONDARY_TIMEOUT * this.wait)) { this._throttledRequestHandler(); } } if (time_elapsed > Math.floor(Strophe.TIMEOUT * this.wait)) { Strophe.warn("Request " + this._requests[0].id + " timed out, over " + Math.floor(Strophe.TIMEOUT * this.wait) + " seconds since last activity"); this._throttledRequestHandler(); } } // reactivate the timer clearTimeout(this._idleTimeout); this._idleTimeout = setTimeout(this._onIdle.bind(this), 100); } }; if (callback) { callback(Strophe, $build, $msg, $iq, $pres); } })(function () { window.Strophe = arguments[0]; window.$build = arguments[1]; window.$msg = arguments[2]; window.$iq = arguments[3]; window.$pres = arguments[4]; }); synapse-0.24.0/contrib/jitsimeetbridge/unjingle/unjingle.js000066400000000000000000000023411317335640100240600ustar00rootroot00000000000000var strophe = require("./strophe/strophe.js").Strophe; var Strophe = strophe.Strophe; var $iq = strophe.$iq; var $msg = strophe.$msg; var $build = strophe.$build; var $pres = strophe.$pres; var jsdom = require("jsdom"); var window = jsdom.jsdom().parentWindow; var $ = require('jquery')(window); var stropheJingle = require("./strophe.jingle.sdp.js"); var input = ''; process.stdin.on('readable', function() { var chunk = process.stdin.read(); if (chunk !== null) { input += chunk; } }); process.stdin.on('end', function() { if (process.argv[2] == '--jingle') { var elem = $(input); // app does: // sess.setRemoteDescription($(iq).find('>jingle'), 'offer'); //console.log(elem.find('>content')); var sdp = new stropheJingle.SDP(''); sdp.fromJingle(elem); console.log(sdp.raw); } else if (process.argv[2] == '--sdp') { var sdp = new stropheJingle.SDP(input); var accept = $iq({to: '%(tojid)s', type: 'set'}) .c('jingle', {xmlns: 'urn:xmpp:jingle:1', //action: 'session-accept', action: '%(action)s', initiator: '%(initiator)s', 
responder: '%(responder)s', sid: '%(sid)s' }); sdp.toJingle(accept, 'responder'); console.log(Strophe.serialize(accept)); } }); synapse-0.24.0/contrib/prometheus/000077500000000000000000000000001317335640100171155ustar00rootroot00000000000000synapse-0.24.0/contrib/prometheus/README000066400000000000000000000010651317335640100177770ustar00rootroot00000000000000This directory contains some sample monitoring config for using the 'Prometheus' monitoring server against synapse. To use it, first install prometheus by following the instructions at http://prometheus.io/ Then add a new job to the main prometheus.conf file: job: { name: "synapse" target_group: { target: "http://SERVER.LOCATION.HERE:PORT/_synapse/metrics" } } Metrics are disabled by default when running synapse; they must be enabled with the 'enable-metrics' option, either in the synapse config file or as a command-line option. synapse-0.24.0/contrib/prometheus/consoles/000077500000000000000000000000001317335640100207425ustar00rootroot00000000000000synapse-0.24.0/contrib/prometheus/consoles/synapse.html000066400000000000000000000265271317335640100233260ustar00rootroot00000000000000{{ template "head" . }} {{ template "prom_content_head" . }}

[synapse.html: the HTML/graph markup of this Prometheus console template is not preserved in this dump; only its section and panel headings survive, listed here for reference:]
System Resources: CPU; Memory; File descriptors
Reactor: Total reactor time; Average reactor tick time; Pending calls per tick
Storage: Queries; Transactions; Transaction execution time; Database scheduling latency; Cache hit ratio; Cache size
Requests: Requests by Servlet (without EventStreamRestServlet or SyncRestServlet); Average response times; All responses by code; Error responses by code; CPU Usage; DB Usage; Average event send times
Federation: Sent Messages; Received Messages; Pending
Clients: Notifiers; Notified Events

{{ template "prom_content_tail" . }} {{ template "tail" }} synapse-0.24.0/contrib/prometheus/synapse.rules000066400000000000000000000032321317335640100216530ustar00rootroot00000000000000synapse_federation_transaction_queue_pendingEdus:total = sum(synapse_federation_transaction_queue_pendingEdus or absent(synapse_federation_transaction_queue_pendingEdus)*0) synapse_federation_transaction_queue_pendingPdus:total = sum(synapse_federation_transaction_queue_pendingPdus or absent(synapse_federation_transaction_queue_pendingPdus)*0) synapse_http_server_requests:method{servlet=""} = sum(synapse_http_server_requests) by (method) synapse_http_server_requests:servlet{method=""} = sum(synapse_http_server_requests) by (servlet) synapse_http_server_requests:total{servlet=""} = sum(synapse_http_server_requests:by_method) by (servlet) synapse_cache:hit_ratio_5m = rate(synapse_util_caches_cache:hits[5m]) / rate(synapse_util_caches_cache:total[5m]) synapse_cache:hit_ratio_30s = rate(synapse_util_caches_cache:hits[30s]) / rate(synapse_util_caches_cache:total[30s]) synapse_federation_client_sent{type="EDU"} = synapse_federation_client_sent_edus + 0 synapse_federation_client_sent{type="PDU"} = synapse_federation_client_sent_pdu_destinations:count + 0 synapse_federation_client_sent{type="Query"} = sum(synapse_federation_client_sent_queries) by (job) synapse_federation_server_received{type="EDU"} = synapse_federation_server_received_edus + 0 synapse_federation_server_received{type="PDU"} = synapse_federation_server_received_pdus + 0 synapse_federation_server_received{type="Query"} = sum(synapse_federation_server_received_queries) by (job) synapse_federation_transaction_queue_pending{type="EDU"} = synapse_federation_transaction_queue_pending_edus + 0 synapse_federation_transaction_queue_pending{type="PDU"} = synapse_federation_transaction_queue_pending_pdus + 0 synapse-0.24.0/contrib/scripts/000077500000000000000000000000001317335640100164115ustar00rootroot00000000000000synapse-0.24.0/contrib/scripts/kick_users.py000077500000000000000000000063561317335640100211420ustar00rootroot00000000000000#!/usr/bin/env python from argparse import ArgumentParser import json import requests import sys import urllib def _mkurl(template, kws): for key in kws: template = template.replace(key, kws[key]) return template def main(hs, room_id, access_token, user_id_prefix, why): if not why: why = "Automated kick." print "Kicking members on %s in room %s matching %s" % (hs, room_id, user_id_prefix) room_state_url = _mkurl( "$HS/_matrix/client/api/v1/rooms/$ROOM/state?access_token=$TOKEN", { "$HS": hs, "$ROOM": room_id, "$TOKEN": access_token } ) print "Getting room state => %s" % room_state_url res = requests.get(room_state_url) print "HTTP %s" % res.status_code state_events = res.json() if "error" in state_events: print "FATAL" print state_events return kick_list = [] room_name = room_id for event in state_events: if not event["type"] == "m.room.member": if event["type"] == "m.room.name": room_name = event["content"].get("name") continue if not event["content"].get("membership") == "join": continue if event["state_key"].startswith(user_id_prefix): kick_list.append(event["state_key"]) if len(kick_list) == 0: print "No user IDs match the prefix '%s'" % user_id_prefix return print "The following user IDs will be kicked from %s" % room_name for uid in kick_list: print uid doit = raw_input("Continue? [Y]es\n") if len(doit) > 0 and doit.lower() == 'y': print "Kicking members..." 
# encode them all kick_list = [urllib.quote(uid) for uid in kick_list] for uid in kick_list: kick_url = _mkurl( "$HS/_matrix/client/api/v1/rooms/$ROOM/state/m.room.member/$UID?access_token=$TOKEN", { "$HS": hs, "$UID": uid, "$ROOM": room_id, "$TOKEN": access_token } ) kick_body = { "membership": "leave", "reason": why } print "Kicking %s" % uid res = requests.put(kick_url, data=json.dumps(kick_body)) if res.status_code != 200: print "ERROR: HTTP %s" % res.status_code if res.json().get("error"): print "ERROR: JSON %s" % res.json() if __name__ == "__main__": parser = ArgumentParser("Kick members in a room matching a certain user ID prefix.") parser.add_argument("-u","--user-id",help="The user ID prefix e.g. '@irc_'") parser.add_argument("-t","--token",help="Your access_token") parser.add_argument("-r","--room",help="The room ID to kick members in") parser.add_argument("-s","--homeserver",help="The base HS url e.g. http://matrix.org") parser.add_argument("-w","--why",help="Reason for the kick. Optional.") args = parser.parse_args() if not args.room or not args.token or not args.user_id or not args.homeserver: parser.print_help() sys.exit(1) else: main(args.homeserver, args.room, args.token, args.user_id, args.why) synapse-0.24.0/contrib/systemd/000077500000000000000000000000001317335640100164125ustar00rootroot00000000000000synapse-0.24.0/contrib/systemd/log_config.yaml000066400000000000000000000010271317335640100214040ustar00rootroot00000000000000version: 1 # In systemd's journal, loglevel is implicitly stored, so let's omit it # from the message text. formatters: journal_fmt: format: '%(name)s: [%(request)s] %(message)s' filters: context: (): synapse.util.logcontext.LoggingContextFilter request: "" handlers: journal: class: systemd.journal.JournalHandler formatter: journal_fmt filters: [context] SYSLOG_IDENTIFIER: synapse root: level: INFO handlers: [journal] disable_existing_loggers: False synapse-0.24.0/contrib/systemd/synapse.service000066400000000000000000000010451317335640100214560ustar00rootroot00000000000000# This assumes that Synapse has been installed as a system package # (e.g. https://www.archlinux.org/packages/community/any/matrix-synapse/ for ArchLinux) # rather than in a user home directory or similar under virtualenv. [Unit] Description=Synapse Matrix homeserver [Service] Type=simple User=synapse Group=synapse WorkingDirectory=/var/lib/synapse ExecStart=/usr/bin/python2.7 -m synapse.app.homeserver --config-path=/etc/synapse/homeserver.yaml ExecStop=/usr/bin/synctl stop /etc/synapse/homeserver.yaml [Install] WantedBy=multi-user.target synapse-0.24.0/contrib/vertobot/000077500000000000000000000000001317335640100165665ustar00rootroot00000000000000synapse-0.24.0/contrib/vertobot/.gitignore000066400000000000000000000000321317335640100205510ustar00rootroot00000000000000vucbot.yaml vertobot.yaml synapse-0.24.0/contrib/vertobot/bot.pl000077500000000000000000000224331317335640100177160ustar00rootroot00000000000000#!/usr/bin/env perl use strict; use warnings; use 5.010; # // use IO::Socket::SSL qw(SSL_VERIFY_NONE); use IO::Async::Loop; use Net::Async::WebSocket::Client; use Net::Async::Matrix 0.11_002; use JSON; use YAML; use Data::UUID; use Getopt::Long; use Data::Dumper; binmode STDOUT, ":encoding(UTF-8)"; binmode STDERR, ":encoding(UTF-8)"; my $loop = IO::Async::Loop->new; # Net::Async::HTTP + SSL + IO::Poll doesn't play well. 
See # https://rt.cpan.org/Ticket/Display.html?id=93107 ref $loop eq "IO::Async::Loop::Poll" and warn "Using SSL with IO::Poll causes known memory-leaks!!\n"; GetOptions( 'C|config=s' => \my $CONFIG, 'eval-from=s' => \my $EVAL_FROM, ) or exit 1; if( defined $EVAL_FROM ) { # An emergency 'eval() this file' hack $SIG{HUP} = sub { my $code = do { open my $fh, "<", $EVAL_FROM or warn( "Cannot read - $!" ), return; local $/; <$fh> }; eval $code or warn "Cannot eval() - $@"; }; } defined $CONFIG or die "Must supply --config\n"; my %CONFIG = %{ YAML::LoadFile( $CONFIG ) }; my %MATRIX_CONFIG = %{ $CONFIG{matrix} }; # No harm in always applying this $MATRIX_CONFIG{SSL_verify_mode} = SSL_VERIFY_NONE; # Track every Room object, so we can ->leave them all on shutdown my %bot_matrix_rooms; my $bridgestate = {}; my $roomid_by_callid = {}; my $bot_verto = Net::Async::WebSocket::Client->new( on_frame => sub { my ( $self, $frame ) = @_; warn "[Verto] receiving $frame"; on_verto_json($frame); }, ); $loop->add( $bot_verto ); my $sessid = lc new Data::UUID->create_str(); my $bot_matrix = Net::Async::Matrix->new( %MATRIX_CONFIG, on_log => sub { warn "log: @_\n" }, on_invite => sub { my ($matrix, $invite) = @_; warn "[Matrix] invited to: " . $invite->{room_id} . " by " . $invite->{inviter} . "\n"; $matrix->join_room( $invite->{room_id} )->get; }, on_room_new => sub { my ($matrix, $room) = @_; warn "[Matrix] have a room ID: " . $room->room_id . "\n"; $bot_matrix_rooms{$room->room_id} = $room; # log in to verto on behalf of this room $bridgestate->{$room->room_id}->{sessid} = $sessid; $room->configure( on_message => \&on_room_message, ); my $f = send_verto_json_request("login", { 'login' => $CONFIG{'verto-dialog-params'}{'login'}, 'passwd' => $CONFIG{'verto-config'}{'passwd'}, 'sessid' => $sessid, }); $matrix->adopt_future($f); # we deliberately don't paginate the room, as we only care about # new calls }, on_unknown_event => \&on_unknown_event, on_error => sub { print STDERR "Matrix failure: @_\n"; }, ); $loop->add( $bot_matrix ); sub on_unknown_event { my ($matrix, $event) = @_; print Dumper($event); my $room_id = $event->{room_id}; my %dp = %{$CONFIG{'verto-dialog-params'}}; $dp{callID} = $bridgestate->{$room_id}->{callid}; if ($event->{type} eq 'm.call.invite') { $bridgestate->{$room_id}->{matrix_callid} = $event->{content}->{call_id}; $bridgestate->{$room_id}->{callid} = lc new Data::UUID->create_str(); $bridgestate->{$room_id}->{offer} = $event->{content}->{offer}->{sdp}; $bridgestate->{$room_id}->{gathered_candidates} = 0; $roomid_by_callid->{ $bridgestate->{$room_id}->{callid} } = $room_id; # no trickle ICE in verto apparently } elsif ($event->{type} eq 'm.call.candidates') { # XXX: compare call IDs if (!$bridgestate->{$room_id}->{gathered_candidates}) { $bridgestate->{$room_id}->{gathered_candidates} = 1; my $offer = $bridgestate->{$room_id}->{offer}; my $candidate_block = { audio => '', video => '', }; foreach (@{$event->{content}->{candidates}}) { if ($_->{sdpMid}) { $candidate_block->{$_->{sdpMid}} .= "a=" . $_->{candidate} . "\r\n"; } else { $candidate_block->{audio} .= "a=" . $_->{candidate} . "\r\n"; $candidate_block->{video} .= "a=" . $_->{candidate} . 
"\r\n"; } } # XXX: assumes audio comes first #$offer =~ s/(a=rtcp-mux[\r\n]+)/$1$candidate_block->{audio}/; #$offer =~ s/(a=rtcp-mux[\r\n]+)/$1$candidate_block->{video}/; $offer =~ s/(m=video)/$candidate_block->{audio}$1/; $offer =~ s/(.$)/$1\n$candidate_block->{video}$1/; my $f = send_verto_json_request("verto.invite", { "sdp" => $offer, "dialogParams" => \%dp, "sessid" => $bridgestate->{$room_id}->{sessid}, }); $matrix->adopt_future($f); } else { # ignore them, as no trickle ICE, although we might as well # batch them up # foreach (@{$event->{content}->{candidates}}) { # push @{$bridgestate->{$room_id}->{candidates}}, $_; # } } } elsif ($event->{type} eq 'm.call.hangup') { if ($bridgestate->{$room_id}->{matrix_callid} eq $event->{content}->{call_id}) { my $f = send_verto_json_request("verto.bye", { "dialogParams" => \%dp, "sessid" => $bridgestate->{$room_id}->{sessid}, }); $matrix->adopt_future($f); } else { warn "Ignoring unrecognised callid: ".$event->{content}->{call_id}; } } else { warn "Unhandled event: $event->{type}"; } } sub on_room_message { my ($room, $from, $content) = @_; my $room_id = $room->room_id; warn "[Matrix] in $room_id: $from: " . $content->{body} . "\n"; } Future->needs_all( $bot_matrix->login( %{ $CONFIG{"matrix-bot"} } )->then( sub { $bot_matrix->start; }), $bot_verto->connect( %{ $CONFIG{"verto-bot"} }, on_connect_error => sub { die "Cannot connect to verto - $_[-1]" }, on_resolve_error => sub { die "Cannot resolve to verto - $_[-1]" }, )->on_done( sub { warn("[Verto] connected to websocket"); }), )->get; $loop->attach_signal( PIPE => sub { warn "pipe\n" } ); $loop->attach_signal( INT => sub { $loop->stop }, ); $loop->attach_signal( TERM => sub { $loop->stop }, ); eval { $loop->run; } or my $e = $@; # When the bot gets shut down, have it leave the rooms so it's clear to observers # that it is no longer running. # if( $CONFIG{"leave-on-shutdown"} // 1 ) { # print STDERR "Removing bot from Matrix rooms...\n"; # Future->wait_all( map { $_->leave->else_done() } values %bot_matrix_rooms )->get; # } # else { # print STDERR "Leaving bot users in Matrix rooms.\n"; # } die $e if $e; exit 0; { my $json_id; my $requests; sub send_verto_json_request { $json_id ||= 1; my ($method, $params) = @_; my $json = { jsonrpc => "2.0", method => $method, params => $params, id => $json_id, }; my $text = JSON->new->encode( $json ); warn "[Verto] sending $text"; $bot_verto->send_frame ( $text ); my $request = $loop->new_future; $requests->{$json_id} = $request; $json_id++; return $request; } sub send_verto_json_response { my ($result, $id) = @_; my $json = { jsonrpc => "2.0", result => $result, id => $id, }; my $text = JSON->new->encode( $json ); warn "[Verto] sending $text"; $bot_verto->send_frame ( $text ); } sub on_verto_json { my $json = JSON->new->decode( $_[0] ); if ($json->{method}) { if (($json->{method} eq 'verto.answer' && $json->{params}->{sdp}) || $json->{method} eq 'verto.media') { my $room_id = $roomid_by_callid->{$json->{params}->{callID}}; my $room = $bot_matrix_rooms{$room_id}; if ($json->{params}->{sdp}) { # HACK HACK HACK HACK $room->_do_POST_json( "/send/m.call.answer", { call_id => $bridgestate->{$room_id}->{matrix_callid}, version => 0, answer => { sdp => $json->{params}->{sdp}, type => "answer", }, })->then( sub { send_verto_json_response( { method => $json->{method}, }, $json->{id}); })->get; } } else { warn ("[Verto] unhandled method: " . 
$json->{method}); send_verto_json_response( { method => $json->{method}, }, $json->{id}); } } elsif ($json->{result}) { $requests->{$json->{id}}->done($json->{result}); } elsif ($json->{error}) { $requests->{$json->{id}}->fail($json->{error}->{message}, $json->{error}); } } } synapse-0.24.0/contrib/vertobot/bridge.pl000077500000000000000000000436171317335640100203750ustar00rootroot00000000000000#!/usr/bin/env perl use strict; use warnings; use 5.010; # // use IO::Socket::SSL qw(SSL_VERIFY_NONE); use IO::Async::Loop; use Net::Async::WebSocket::Client; use Net::Async::HTTP; use Net::Async::HTTP::Server; use JSON; use YAML; use Data::UUID; use Getopt::Long; use Data::Dumper; use URI::Encode qw(uri_encode uri_decode); binmode STDOUT, ":encoding(UTF-8)"; binmode STDERR, ":encoding(UTF-8)"; my $msisdn_to_matrix = { '447417892400' => '@matthew:matrix.org', }; my $matrix_to_msisdn = {}; foreach (keys %$msisdn_to_matrix) { $matrix_to_msisdn->{$msisdn_to_matrix->{$_}} = $_; } my $loop = IO::Async::Loop->new; # Net::Async::HTTP + SSL + IO::Poll doesn't play well. See # https://rt.cpan.org/Ticket/Display.html?id=93107 # ref $loop eq "IO::Async::Loop::Poll" and # warn "Using SSL with IO::Poll causes known memory-leaks!!\n"; GetOptions( 'C|config=s' => \my $CONFIG, 'eval-from=s' => \my $EVAL_FROM, ) or exit 1; if( defined $EVAL_FROM ) { # An emergency 'eval() this file' hack $SIG{HUP} = sub { my $code = do { open my $fh, "<", $EVAL_FROM or warn( "Cannot read - $!" ), return; local $/; <$fh> }; eval $code or warn "Cannot eval() - $@"; }; } defined $CONFIG or die "Must supply --config\n"; my %CONFIG = %{ YAML::LoadFile( $CONFIG ) }; my %MATRIX_CONFIG = %{ $CONFIG{matrix} }; # No harm in always applying this $MATRIX_CONFIG{SSL_verify_mode} = SSL_VERIFY_NONE; my $bridgestate = {}; my $roomid_by_callid = {}; my $sessid = lc new Data::UUID->create_str(); my $as_token = $CONFIG{"matrix-bot"}->{as_token}; my $hs_domain = $CONFIG{"matrix-bot"}->{domain}; my $http = Net::Async::HTTP->new(); $loop->add( $http ); sub create_virtual_user { my ($localpart) = @_; my ( $response ) = $http->do_request( method => "POST", uri => URI->new( $CONFIG{"matrix"}->{server}. "/_matrix/client/api/v1/register?". "access_token=$as_token&user_id=$localpart" ), content_type => "application/json", content => <get; warn $response->as_string if ($response->code != 200); } my $http_server = Net::Async::HTTP::Server->new( on_request => sub { my $self = shift; my ( $req ) = @_; my $response; my $path = uri_decode($req->path); warn("request: $path"); if ($path =~ m#/users/\@(\+.*)#) { # when queried about virtual users, auto-create them in the HS my $localpart = $1; create_virtual_user($localpart); $response = HTTP::Response->new( 200 ); $response->add_content('{}'); $response->content_type( "application/json" ); } elsif ($path =~ m#/transactions/(.*)#) { my $event = JSON->new->decode($req->body); print Dumper($event); my $room_id = $event->{room_id}; my %dp = %{$CONFIG{'verto-dialog-params'}}; $dp{callID} = $bridgestate->{$room_id}->{callid}; if ($event->{type} eq 'm.room.membership') { my $membership = $event->{content}->{membership}; my $state_key = $event->{state_key}; my $room_id = $event->{state_id}; if ($membership eq 'invite') { # autojoin invites my ( $response ) = $http->do_request( method => "POST", uri => URI->new( $CONFIG{"matrix"}->{server}. "/_matrix/client/api/v1/rooms/$room_id/join?". 
"access_token=$as_token&user_id=$state_key" ), content_type => "application/json", content => "{}", )->get; warn $response->as_string if ($response->code != 200); } } elsif ($event->{type} eq 'm.call.invite') { my $room_id = $event->{room_id}; $bridgestate->{$room_id}->{matrix_callid} = $event->{content}->{call_id}; $bridgestate->{$room_id}->{callid} = lc new Data::UUID->create_str(); $bridgestate->{$room_id}->{sessid} = $sessid; # $bridgestate->{$room_id}->{offer} = $event->{content}->{offer}->{sdp}; my $offer = $event->{content}->{offer}->{sdp}; # $bridgestate->{$room_id}->{gathered_candidates} = 0; $roomid_by_callid->{ $bridgestate->{$room_id}->{callid} } = $room_id; # no trickle ICE in verto apparently my $f = send_verto_json_request("verto.invite", { "sdp" => $offer, "dialogParams" => \%dp, "sessid" => $bridgestate->{$room_id}->{sessid}, }); $self->adopt_future($f); } # elsif ($event->{type} eq 'm.call.candidates') { # # XXX: this could fire for both matrix->verto and verto->matrix calls # # and races as it collects candidates. much better to just turn off # # candidate gathering in the webclient entirely for now # # my $room_id = $event->{room_id}; # # XXX: compare call IDs # if (!$bridgestate->{$room_id}->{gathered_candidates}) { # $bridgestate->{$room_id}->{gathered_candidates} = 1; # my $offer = $bridgestate->{$room_id}->{offer}; # my $candidate_block = ""; # foreach (@{$event->{content}->{candidates}}) { # $candidate_block .= "a=" . $_->{candidate} . "\r\n"; # } # # XXX: collate using the right m= line - for now assume audio call # $offer =~ s/(a=rtcp.*[\r\n]+)/$1$candidate_block/; # # my $f = send_verto_json_request("verto.invite", { # "sdp" => $offer, # "dialogParams" => \%dp, # "sessid" => $bridgestate->{$room_id}->{sessid}, # }); # $self->adopt_future($f); # } # else { # # ignore them, as no trickle ICE, although we might as well # # batch them up # # foreach (@{$event->{content}->{candidates}}) { # # push @{$bridgestate->{$room_id}->{candidates}}, $_; # # } # } # } elsif ($event->{type} eq 'm.call.answer') { # grab the answer and relay it to verto as a verto.answer my $room_id = $event->{room_id}; my $answer = $event->{content}->{answer}->{sdp}; my $f = send_verto_json_request("verto.answer", { "sdp" => $answer, "dialogParams" => \%dp, "sessid" => $bridgestate->{$room_id}->{sessid}, }); $self->adopt_future($f); } elsif ($event->{type} eq 'm.call.hangup') { my $room_id = $event->{room_id}; if ($bridgestate->{$room_id}->{matrix_callid} eq $event->{content}->{call_id}) { my $f = send_verto_json_request("verto.bye", { "dialogParams" => \%dp, "sessid" => $bridgestate->{$room_id}->{sessid}, }); $self->adopt_future($f); } else { warn "Ignoring unrecognised callid: ".$event->{content}->{call_id}; } } else { warn "Unhandled event: $event->{type}"; } $response = HTTP::Response->new( 200 ); $response->add_content('{}'); $response->content_type( "application/json" ); } else { warn "Unhandled path: $path"; $response = HTTP::Response->new( 404 ); } $req->respond( $response ); }, ); $loop->add( $http_server ); $http_server->listen( addr => { family => "inet", socktype => "stream", port => 8009 }, on_listen_error => sub { die "Cannot listen - $_[-1]\n" }, ); my $bot_verto = Net::Async::WebSocket::Client->new( on_frame => sub { my ( $self, $frame ) = @_; warn "[Verto] receiving $frame"; on_verto_json($frame); }, ); $loop->add( $bot_verto ); my $verto_connecting = $loop->new_future; $bot_verto->connect( %{ $CONFIG{"verto-bot"} }, on_connected => sub { warn("[Verto] connected to websocket"); 
if (not $verto_connecting->is_done) { $verto_connecting->done($bot_verto); send_verto_json_request("login", { 'login' => $CONFIG{'verto-dialog-params'}{'login'}, 'passwd' => $CONFIG{'verto-config'}{'passwd'}, 'sessid' => $sessid, }); } }, on_connect_error => sub { die "Cannot connect to verto - $_[-1]" }, on_resolve_error => sub { die "Cannot resolve to verto - $_[-1]" }, ); # die Dumper($verto_connecting); my $as_url = $CONFIG{"matrix-bot"}->{as_url}; Future->needs_all( $http->do_request( method => "POST", uri => URI->new( $CONFIG{"matrix"}->{server}."/_matrix/appservice/v1/register" ), content_type => "application/json", content => <then( sub{ my ($response) = (@_); warn $response->as_string if ($response->code != 200); return Future->done; }), $verto_connecting, )->get; $loop->attach_signal( PIPE => sub { warn "pipe\n" } ); $loop->attach_signal( INT => sub { $loop->stop }, ); $loop->attach_signal( TERM => sub { $loop->stop }, ); eval { $loop->run; } or my $e = $@; die $e if $e; exit 0; { my $json_id; my $requests; sub send_verto_json_request { $json_id ||= 1; my ($method, $params) = @_; my $json = { jsonrpc => "2.0", method => $method, params => $params, id => $json_id, }; my $text = JSON->new->encode( $json ); warn "[Verto] sending $text"; $bot_verto->send_frame ( $text ); my $request = $loop->new_future; $requests->{$json_id} = $request; $json_id++; return $request; } sub send_verto_json_response { my ($result, $id) = @_; my $json = { jsonrpc => "2.0", result => $result, id => $id, }; my $text = JSON->new->encode( $json ); warn "[Verto] sending $text"; $bot_verto->send_frame ( $text ); } sub on_verto_json { my $json = JSON->new->decode( $_[0] ); if ($json->{method}) { if (($json->{method} eq 'verto.answer' && $json->{params}->{sdp}) || $json->{method} eq 'verto.media') { my $caller = $json->{dialogParams}->{caller_id_number}; my $callee = $json->{dialogParams}->{destination_number}; my $caller_user = '@+' . $caller . ':' . $hs_domain; my $callee_user = $msisdn_to_matrix->{$callee} || warn "unrecogised callee: $callee"; my $room_id = $roomid_by_callid->{$json->{params}->{callID}}; if ($json->{params}->{sdp}) { $http->do_request( method => "POST", uri => URI->new( $CONFIG{"matrix"}->{server}. "/_matrix/client/api/v1/send/m.call.answer?". "access_token=$as_token&user_id=$caller_user" ), content_type => "application/json", content => JSON->new->encode({ call_id => $bridgestate->{$room_id}->{matrix_callid}, version => 0, answer => { sdp => $json->{params}->{sdp}, type => "answer", }, }), )->then( sub { send_verto_json_response( { method => $json->{method}, }, $json->{id}); })->get; } } elsif ($json->{method} eq 'verto.invite') { my $caller = $json->{dialogParams}->{caller_id_number}; my $callee = $json->{dialogParams}->{destination_number}; my $caller_user = '@+' . $caller . ':' . $hs_domain; my $callee_user = $msisdn_to_matrix->{$callee} || warn "unrecogised callee: $callee"; my $alias = ($caller lt $callee) ? ($caller.'-'.$callee) : ($callee.'-'.$caller); my $room_id; # create a virtual user for the caller if needed. create_virtual_user($caller); # create a room of form #peer-peer and invite the callee $http->do_request( method => "POST", uri => URI->new( $CONFIG{"matrix"}->{server}. "/_matrix/client/api/v1/createRoom?". 
"access_token=$as_token&user_id=$caller_user" ), content_type => "application/json", content => JSON->new->encode({ room_alias_name => $alias, invite => [ $callee_user ], }), )->then( sub { my ( $response ) = @_; my $resp = JSON->new->decode($response->content); $room_id = $resp->{room_id}; $roomid_by_callid->{$json->{params}->{callID}} = $room_id; })->get; # join it my ($response) = $http->do_request( method => "POST", uri => URI->new( $CONFIG{"matrix"}->{server}. "/_matrix/client/api/v1/join/$room_id?". "access_token=$as_token&user_id=$caller_user" ), content_type => "application/json", content => '{}', )->get; $bridgestate->{$room_id}->{matrix_callid} = lc new Data::UUID->create_str(); $bridgestate->{$room_id}->{callid} = $json->{dialogParams}->{callID}; $bridgestate->{$room_id}->{sessid} = $sessid; # put the m.call.invite in there $http->do_request( method => "POST", uri => URI->new( $CONFIG{"matrix"}->{server}. "/_matrix/client/api/v1/send/m.call.invite?". "access_token=$as_token&user_id=$caller_user" ), content_type => "application/json", content => JSON->new->encode({ call_id => $bridgestate->{$room_id}->{matrix_callid}, version => 0, answer => { sdp => $json->{params}->{sdp}, type => "offer", }, }), )->then( sub { # acknowledge the verto send_verto_json_response( { method => $json->{method}, }, $json->{id}); })->get; } elsif ($json->{method} eq 'verto.bye') { my $caller = $json->{dialogParams}->{caller_id_number}; my $callee = $json->{dialogParams}->{destination_number}; my $caller_user = '@+' . $caller . ':' . $hs_domain; my $callee_user = $msisdn_to_matrix->{$callee} || warn "unrecogised callee: $callee"; my $room_id = $roomid_by_callid->{$json->{params}->{callID}}; # put the m.call.hangup into the room $http->do_request( method => "POST", uri => URI->new( $CONFIG{"matrix"}->{server}. "/_matrix/client/api/v1/send/m.call.hangup?". "access_token=$as_token&user_id=$caller_user" ), content_type => "application/json", content => JSON->new->encode({ call_id => $bridgestate->{$room_id}->{matrix_callid}, version => 0, }), )->then( sub { # acknowledge the verto send_verto_json_response( { method => $json->{method}, }, $json->{id}); })->get; } else { warn ("[Verto] unhandled method: " . 
$json->{method}); send_verto_json_response( { method => $json->{method}, }, $json->{id}); } } elsif ($json->{result}) { $requests->{$json->{id}}->done($json->{result}); } elsif ($json->{error}) { $requests->{$json->{id}}->fail($json->{error}->{message}, $json->{error}); } } } synapse-0.24.0/contrib/vertobot/config.yaml000066400000000000000000000012431317335640100207170ustar00rootroot00000000000000# Generic Matrix connection params matrix: server: 'matrix.org' SSL: 1 # Bot-user connection details matrix-bot: user_id: '@vertobot:matrix.org' password: '' domain: 'matrix.org" as_url: 'http://localhost:8009' as_token: 'vertobot123' verto-bot: host: webrtc.freeswitch.org service: 8081 url: "ws://webrtc.freeswitch.org:8081/" verto-config: passwd: 1234 verto-dialog-params: useVideo: false useStereo: false tag: "webcam" login: "1008@webrtc.freeswitch.org" destination_number: "9664" caller_id_name: "FreeSWITCH User" caller_id_number: "1008" callID: "" remote_caller_id_name: "Outbound Call" remote_caller_id_number: "9664" synapse-0.24.0/contrib/vertobot/cpanfile000066400000000000000000000005651317335640100203000ustar00rootroot00000000000000requires 'parent', 0; requires 'Future', '>= 0.29'; requires 'Net::Async::Matrix', '>= 0.11_002'; requires 'Net::Async::Matrix::Utils'; requires 'Net::Async::WebSocket::Protocol', 0; requires 'Data::UUID', 0; requires 'IO::Async', '>= 0.63'; requires 'IO::Async::SSL', 0; requires 'IO::Socket::SSL', 0; requires 'YAML', 0; requires 'JSON', 0; requires 'Getopt::Long', 0; synapse-0.24.0/contrib/vertobot/verto-example.json000066400000000000000000000172551317335640100222630ustar00rootroot00000000000000# JSON is shown in *reverse* chronological order. # Send v. Receive is implicit. { "jsonrpc": "2.0", "id": 7, "result": { "callID": "12795aa6-2a8d-84ee-ce63-2e82ffe825ef", "message": "CALL ENDED", "causeCode": 16, "cause": "NORMAL_CLEARING", "sessid": "03a11060-3e14-23b6-c620-51b892c52983" } } { "jsonrpc": "2.0", "method": "verto.bye", "params": { "dialogParams": { "useVideo": false, "useStereo": true, "tag": "webcam", "login": "1008@webrtc.freeswitch.org", "destination_number": "9664", "caller_id_name": "FreeSWITCH User", "caller_id_number": "1008", "callID": "12795aa6-2a8d-84ee-ce63-2e82ffe825ef", "remote_caller_id_name": "Outbound Call", "remote_caller_id_number": "9664" }, "sessid": "03a11060-3e14-23b6-c620-51b892c52983" }, "id": 7 } { "jsonrpc": "2.0", "id": 6, "result": { "callID": "12795aa6-2a8d-84ee-ce63-2e82ffe825ef", "action": "toggleHold", "holdState": "active", "sessid": "03a11060-3e14-23b6-c620-51b892c52983" } } { "jsonrpc": "2.0", "method": "verto.modify", "params": { "action": "toggleHold", "dialogParams": { "useVideo": false, "useStereo": true, "tag": "webcam", "login": "1008@webrtc.freeswitch.org", "destination_number": "9664", "caller_id_name": "FreeSWITCH User", "caller_id_number": "1008", "callID": "12795aa6-2a8d-84ee-ce63-2e82ffe825ef", "remote_caller_id_name": "Outbound Call", "remote_caller_id_number": "9664" }, "sessid": "03a11060-3e14-23b6-c620-51b892c52983" }, "id": 6 } { "jsonrpc": "2.0", "id": 5, "result": { "callID": "12795aa6-2a8d-84ee-ce63-2e82ffe825ef", "action": "toggleHold", "holdState": "held", "sessid": "03a11060-3e14-23b6-c620-51b892c52983" } } { "jsonrpc": "2.0", "method": "verto.modify", "params": { "action": "toggleHold", "dialogParams": { "useVideo": false, "useStereo": true, "tag": "webcam", "login": "1008@webrtc.freeswitch.org", "destination_number": "9664", "caller_id_name": "FreeSWITCH User", "caller_id_number": "1008", "callID": 
"12795aa6-2a8d-84ee-ce63-2e82ffe825ef", "remote_caller_id_name": "Outbound Call", "remote_caller_id_number": "9664" }, "sessid": "03a11060-3e14-23b6-c620-51b892c52983" }, "id": 5 } { "jsonrpc": "2.0", "id": 349819, "result": { "method": "verto.answer" } } { "jsonrpc": "2.0", "id": 349819, "method": "verto.answer", "params": { "callID": "12795aa6-2a8d-84ee-ce63-2e82ffe825ef", "sdp": "v=0\no=FreeSWITCH 1417101432 1417101433 IN IP4 209.105.235.10\ns=FreeSWITCH\nc=IN IP4 209.105.235.10\nt=0 0\na=msid-semantic: WMS jA3rmwLVwUq1iE6TYEYHeLk2YTUlh1Vq\nm=audio 30134 RTP/SAVPF 111 126\na=rtpmap:111 opus/48000/2\na=fmtp:111 minptime=10; stereo=1\na=rtpmap:126 telephone-event/8000\na=silenceSupp:off - - - -\na=ptime:20\na=sendrecv\na=fingerprint:sha-256 F8:72:18:E9:72:89:99:22:5B:F8:B6:C6:C6:0D:C5:9B:B2:FB:BC:CA:8D:AB:13:8A:66:E1:37:38:A0:16:AA:41\na=rtcp-mux\na=rtcp:30134 IN IP4 209.105.235.10\na=ssrc:210967934 cname:rOIEajpw4FocakWY\na=ssrc:210967934 msid:jA3rmwLVwUq1iE6TYEYHeLk2YTUlh1Vq a0\na=ssrc:210967934 mslabel:jA3rmwLVwUq1iE6TYEYHeLk2YTUlh1Vq\na=ssrc:210967934 label:jA3rmwLVwUq1iE6TYEYHeLk2YTUlh1Vqa0\na=ice-ufrag:OKwTmGLapwmxn7OF\na=ice-pwd:MmaMwq8rVmtWxfLbQ7U2Ew3T\na=candidate:2372654928 1 udp 659136 209.105.235.10 30134 typ host generation 0\n" } } { "jsonrpc": "2.0", "id": 4, "result": { "message": "CALL CREATED", "callID": "12795aa6-2a8d-84ee-ce63-2e82ffe825ef", "sessid": "03a11060-3e14-23b6-c620-51b892c52983" } } { "jsonrpc": "2.0", "method": "verto.invite", "params": { "sdp": "v=0\r\no=- 1381685806032722557 2 IN IP4 127.0.0.1\r\ns=-\r\nt=0 0\r\na=group:BUNDLE audio\r\na=msid-semantic: WMS 6OOMyGAyJakjwaOOBtV7WcBCCuIW6PpuXsNg\r\nm=audio 63088 RTP/SAVPF 111 103 104 0 8 106 105 13 126\r\nc=IN IP4 81.138.8.249\r\na=rtcp:63088 IN IP4 81.138.8.249\r\na=candidate:460398169 1 udp 2122260223 10.10.79.10 49945 typ host generation 0\r\na=candidate:460398169 2 udp 2122260223 10.10.79.10 49945 typ host generation 0\r\na=candidate:3460887983 1 udp 2122194687 192.168.1.64 63088 typ host generation 0\r\na=candidate:3460887983 2 udp 2122194687 192.168.1.64 63088 typ host generation 0\r\na=candidate:945327227 1 udp 1685987071 81.138.8.249 63088 typ srflx raddr 192.168.1.64 rport 63088 generation 0\r\na=candidate:945327227 2 udp 1685987071 81.138.8.249 63088 typ srflx raddr 192.168.1.64 rport 63088 generation 0\r\na=candidate:1441981097 1 tcp 1518280447 10.10.79.10 0 typ host tcptype active generation 0\r\na=candidate:1441981097 2 tcp 1518280447 10.10.79.10 0 typ host tcptype active generation 0\r\na=candidate:2160789855 1 tcp 1518214911 192.168.1.64 0 typ host tcptype active generation 0\r\na=candidate:2160789855 2 tcp 1518214911 192.168.1.64 0 typ host tcptype active generation 0\r\na=ice-ufrag:cP4qeRhn0LpcpA88\r\na=ice-pwd:fREmgSkXsDLGUUH1bwfrBQhW\r\na=ice-options:google-ice\r\na=fingerprint:sha-256 AF:35:64:1B:62:8A:EF:27:AE:2B:88:2E:FE:78:29:0B:08:DA:64:6C:DE:02:57:E3:EE:B1:D7:86:B8:36:8F:B0\r\na=setup:actpass\r\na=mid:audio\r\na=extmap:1 urn:ietf:params:rtp-hdrext:ssrc-audio-level\r\na=extmap:3 http://www.webrtc.org/experiments/rtp-hdrext/abs-send-time\r\na=sendrecv\r\na=rtcp-mux\r\na=rtpmap:111 opus/48000/2\r\na=fmtp:111 minptime=10; stereo=1\r\na=rtpmap:103 ISAC/16000\r\na=rtpmap:104 ISAC/32000\r\na=rtpmap:0 PCMU/8000\r\na=rtpmap:8 PCMA/8000\r\na=rtpmap:106 CN/32000\r\na=rtpmap:105 CN/16000\r\na=rtpmap:13 CN/8000\r\na=rtpmap:126 telephone-event/8000\r\na=maxptime:60\r\na=ssrc:558827154 cname:vdKHBNqa17t2gmE3\r\na=ssrc:558827154 msid:6OOMyGAyJakjwaOOBtV7WcBCCuIW6PpuXsNg 
bf1303fb-9833-4d7d-b9e4-b32cfe04acc3\r\na=ssrc:558827154 mslabel:6OOMyGAyJakjwaOOBtV7WcBCCuIW6PpuXsNg\r\na=ssrc:558827154 label:bf1303fb-9833-4d7d-b9e4-b32cfe04acc3\r\n", "dialogParams": { "useVideo": false, "useStereo": true, "tag": "webcam", "login": "1008@webrtc.freeswitch.org", "destination_number": "9664", "caller_id_name": "FreeSWITCH User", "caller_id_number": "1008", "callID": "12795aa6-2a8d-84ee-ce63-2e82ffe825ef", "remote_caller_id_name": "Outbound Call", "remote_caller_id_number": "9664" }, "sessid": "03a11060-3e14-23b6-c620-51b892c52983" }, "id": 4 } { "jsonrpc": "2.0", "id": 3, "result": { "message": "logged in", "sessid": "03a11060-3e14-23b6-c620-51b892c52983" } } { "jsonrpc": "2.0", "id": 1, "error": { "code": -32000, "message": "Authentication Required" } } { "jsonrpc": "2.0", "method": "login", "params": { "login": "1008@webrtc.freeswitch.org", "passwd": "1234", "sessid": "03a11060-3e14-23b6-c620-51b892c52983" }, "id": 3 } { "jsonrpc": "2.0", "id": 2, "error": { "code": -32000, "message": "Authentication Required" } } { "jsonrpc": "2.0", "method": "login", "params": { "sessid": "03a11060-3e14-23b6-c620-51b892c52983" }, "id": 1 } { "jsonrpc": "2.0", "method": "login", "params": { "sessid": "03a11060-3e14-23b6-c620-51b892c52983" }, "id": 2 } synapse-0.24.0/demo/000077500000000000000000000000001317335640100142065ustar00rootroot00000000000000synapse-0.24.0/demo/README000066400000000000000000000014641317335640100150730ustar00rootroot00000000000000Requires you to have done: python setup.py develop The demo start.sh will start three synapse servers on ports 8080, 8081 and 8082, with host names localhost:$port. This can be easily changed to `hostname`:$port in start.sh if required. It will also start a web server on port 8000 pointed at the webclient. stop.sh will stop the synapse servers and the webclient. clean.sh will delete the databases and log files. To start a completely new set of servers, run: ./demo/stop.sh; ./demo/clean.sh && ./demo/start.sh Logs and sqlitedb will be stored in demo/808{0,1,2}.{log,db} Also note that when joining a public room on a differnt HS via "#foo:bar.net", then you are (in the current impl) joining a room with room_id "foo". This means that it won't work if your HS already has a room with that name. synapse-0.24.0/demo/clean.sh000077500000000000000000000004211317335640100156240ustar00rootroot00000000000000#!/bin/bash set -e DIR="$( cd "$( dirname "$0" )" && pwd )" PID_FILE="$DIR/servers.pid" if [ -f $PID_FILE ]; then echo "servers.pid exists!" exit 1 fi for port in 8080 8081 8082; do rm -rf $DIR/$port rm -rf $DIR/media_store.$port done rm -rf $DIR/etc synapse-0.24.0/demo/demo.tls.dh000066400000000000000000000007221317335640100162510ustar00rootroot000000000000002048-bit DH parameters taken from rfc3526 -----BEGIN DH PARAMETERS----- MIIBCAKCAQEA///////////JD9qiIWjCNMTGYouA3BzRKQJOCIpnzHQCC76mOxOb IlFKCHmONATd75UZs806QxswKwpt8l8UN0/hNW1tUcJF5IW1dmJefsb0TELppjft awv/XLb0Brft7jhr+1qJn6WunyQRfEsf5kkoZlHs5Fs9wgB8uKFjvwWY2kg2HFXT mmkWP6j9JM9fg2VdI9yjrZYcYvNWIIVSu57VKQdwlpZtZww1Tkq8mATxdGwIyhgh fDKQXkYuNs474553LBgOhgObJ4Oi7Aeij7XFXfBvTFLJ3ivL9pVYFxg5lUl86pVq 5RXSJhiY+gUQFXKOWoqsqmj//////////wIBAg== -----END DH PARAMETERS----- synapse-0.24.0/demo/start.sh000077500000000000000000000026541317335640100157110ustar00rootroot00000000000000#!/bin/bash DIR="$( cd "$( dirname "$0" )" && pwd )" CWD=$(pwd) cd "$DIR/.." mkdir -p demo/etc export PYTHONPATH=$(readlink -f $(pwd)) echo $PYTHONPATH for port in 8080 8081 8082; do echo "Starting server on port $port... 
" https_port=$((port + 400)) mkdir -p demo/$port pushd demo/$port #rm $DIR/etc/$port.config python -m synapse.app.homeserver \ --generate-config \ -H "localhost:$https_port" \ --config-path "$DIR/etc/$port.config" \ --report-stats no # Check script parameters if [ $# -eq 1 ]; then if [ $1 = "--no-rate-limit" ]; then # Set high limits in config file to disable rate limiting perl -p -i -e 's/rc_messages_per_second.*/rc_messages_per_second: 1000/g' $DIR/etc/$port.config perl -p -i -e 's/rc_message_burst_count.*/rc_message_burst_count: 1000/g' $DIR/etc/$port.config fi fi perl -p -i -e 's/^enable_registration:.*/enable_registration: true/g' $DIR/etc/$port.config if ! grep -F "full_twisted_stacktraces" -q $DIR/etc/$port.config; then echo "full_twisted_stacktraces: true" >> $DIR/etc/$port.config fi if ! grep -F "report_stats" -q $DIR/etc/$port.config ; then echo "report_stats: false" >> $DIR/etc/$port.config fi python -m synapse.app.homeserver \ --config-path "$DIR/etc/$port.config" \ -D \ -vv \ popd done cd "$CWD" synapse-0.24.0/demo/stop.sh000077500000000000000000000003741317335640100155360ustar00rootroot00000000000000#!/bin/bash DIR="$( cd "$( dirname "$0" )" && pwd )" FILES=$(find "$DIR" -name "*.pid" -type f); for pid_file in $FILES; do pid=$(cat "$pid_file") if [[ $pid ]]; then echo "Killing $pid_file with $pid" kill $pid fi done synapse-0.24.0/demo/webserver.py000066400000000000000000000030461317335640100165670ustar00rootroot00000000000000import argparse import BaseHTTPServer import os import SimpleHTTPServer import cgi, logging from daemonize import Daemonize class SimpleHTTPRequestHandlerWithPOST(SimpleHTTPServer.SimpleHTTPRequestHandler): UPLOAD_PATH = "upload" """ Accept all post request as file upload """ def do_POST(self): path = os.path.join(self.UPLOAD_PATH, os.path.basename(self.path)) length = self.headers['content-length'] data = self.rfile.read(int(length)) with open(path, 'wb') as fh: fh.write(data) self.send_response(200) self.send_header('Content-Type', 'application/json') self.end_headers() # Return the absolute path of the uploaded file self.wfile.write('{"url":"/%s"}' % path) def setup(): parser = argparse.ArgumentParser() parser.add_argument("directory") parser.add_argument("-p", "--port", dest="port", type=int, default=8080) parser.add_argument('-P', "--pid-file", dest="pid", default="web.pid") args = parser.parse_args() # Get absolute path to directory to serve, as daemonize changes to '/' os.chdir(args.directory) dr = os.getcwd() httpd = BaseHTTPServer.HTTPServer( ('', args.port), SimpleHTTPRequestHandlerWithPOST ) def run(): os.chdir(dr) httpd.serve_forever() daemon = Daemonize( app="synapse-webclient", pid=args.pid, action=run, auto_close_fds=False, ) daemon.start() if __name__ == '__main__': setup() synapse-0.24.0/docs/000077500000000000000000000000001317335640100142125ustar00rootroot00000000000000synapse-0.24.0/docs/CAPTCHA_SETUP.rst000066400000000000000000000020551317335640100167710ustar00rootroot00000000000000Captcha can be enabled for this home server. This file explains how to do that. The captcha mechanism used is Google's ReCaptcha. This requires API keys from Google. Getting keys ------------ Requires a public/private key pair from: https://developers.google.com/recaptcha/ Setting ReCaptcha Keys ---------------------- The keys are a config option on the home server config. If they are not visible, you can generate them via --generate-config. 
Set the following value:: recaptcha_public_key: YOUR_PUBLIC_KEY recaptcha_private_key: YOUR_PRIVATE_KEY In addition, you MUST enable captchas via:: enable_registration_captcha: true Configuring IP used for auth ---------------------------- The ReCaptcha API requires that the IP address of the user who solved the captcha is sent. If the client is connecting through a proxy or load balancer, it may be required to use the X-Forwarded-For (XFF) header instead of the origin IP address. This can be configured using the x_forwarded directive in the listeners section of the homeserver.yaml configuration file. synapse-0.24.0/docs/README.rst000066400000000000000000000003671317335640100157070ustar00rootroot00000000000000All matrix-generic documentation now lives in its own project at github.com/matrix-org/matrix-doc.git Only Synapse implementation-specific documentation lives here now (together with some older stuff will be shortly migrated over to matrix-doc) synapse-0.24.0/docs/admin_api/000077500000000000000000000000001317335640100161335ustar00rootroot00000000000000synapse-0.24.0/docs/admin_api/README.rst000066400000000000000000000005601317335640100176230ustar00rootroot00000000000000Admin APIs ========== This directory includes documentation for the various synapse specific admin APIs available. Only users that are server admins can use these APIs. A user can be marked as a server admin by updating the database directly, e.g.: ``UPDATE users SET admin = 1 WHERE name = '@foo:bar.com'`` Restarting may be required for the changes to register. synapse-0.24.0/docs/admin_api/purge_history_api.rst000066400000000000000000000013001317335640100224130ustar00rootroot00000000000000Purge History API ================= The purge history API allows server admins to purge historic events from their database, reclaiming disk space. **NB!** This will not delete local events (locally sent messages content etc) from the database, but will remove lots of the metadata about them and does dramatically reduce the on disk space usage Depending on the amount of history being purged a call to the API may take several minutes or longer. During this period users will not be able to paginate further back in the room from the point being purged from. The API is simply: ``POST /_matrix/client/r0/admin/purge_history//`` including an ``access_token`` of a server admin. synapse-0.24.0/docs/admin_api/purge_remote_media.rst000066400000000000000000000007241317335640100225240ustar00rootroot00000000000000Purge Remote Media API ====================== The purge remote media API allows server admins to purge old cached remote media. The API is:: POST /_matrix/client/r0/admin/purge_media_cache?before_ts=&access_token= {} Which will remove all cached media that was last accessed before ````. If the user re-requests purged remote media, synapse will re-request the media from the originating server. synapse-0.24.0/docs/admin_api/user_admin_api.rst000066400000000000000000000032321317335640100216440ustar00rootroot00000000000000Query Account ============= This API returns information about a specific user account. The api is:: GET /_matrix/client/r0/admin/whois/ including an ``access_token`` of a server admin. It returns a JSON body like the following: .. code:: json { "user_id": "", "devices": { "": { "sessions": [ { "connections": [ { "ip": "1.2.3.4", "last_seen": 1417222374433, "user_agent": "Mozilla/5.0 ..." }, { "ip": "1.2.3.10", "last_seen": 1417222374500, "user_agent": "Dalvik/2.1.0 ..." 
} ] } ] } } } ``last_seen`` is measured in milliseconds since the Unix epoch. Deactivate Account ================== This API deactivates an account. It removes active access tokens, resets the password, and deletes third-party IDs (to prevent the user requesting a password reset). The api is:: POST /_matrix/client/r0/admin/deactivate/ including an ``access_token`` of a server admin, and an empty request body. Reset password ============== Changes the password of another user. The api is:: POST /_matrix/client/r0/admin/reset_password/ with a body of: .. code:: json { "new_password": "" } including an ``access_token`` of a server admin. synapse-0.24.0/docs/ancient_architecture_notes.rst000066400000000000000000000055071317335640100223460ustar00rootroot00000000000000.. WARNING:: These architecture notes are spectacularly old, and date back to when Synapse was just federation code in isolation. This should be merged into the main spec. = Server to Server = == Server to Server Stack == To use the server to server stack, home servers should only need to interact with the Messaging layer. The server to server side of things is designed into 4 distinct layers: 1. Messaging Layer 2. Pdu Layer 3. Transaction Layer 4. Transport Layer Where the bottom (the transport layer) is what talks to the internet via HTTP, and the top (the messaging layer) talks to the rest of the Home Server with a domain specific API. 1. Messaging Layer This is what the rest of the Home Server hits to send messages, join rooms, etc. It also allows you to register callbacks for when it get's notified by lower levels that e.g. a new message has been received. It is responsible for serializing requests to send to the data layer, and to parse requests received from the data layer. 2. PDU Layer This layer handles: * duplicate pdu_id's - i.e., it makes sure we ignore them. * responding to requests for a given pdu_id * responding to requests for all metadata for a given context (i.e. room) * handling incoming backfill requests So it has to parse incoming messages to discover which are metadata and which aren't, and has to correctly clobber existing metadata where appropriate. For incoming PDUs, it has to check the PDUs it references to see if we have missed any. If we have go and ask someone (another home server) for it. 3. Transaction Layer This layer makes incoming requests idempotent. I.e., it stores which transaction id's we have seen and what our response were. If we have already seen a message with the given transaction id, we do not notify higher levels but simply respond with the previous response. transaction_id is from "GET /send//" It's also responsible for batching PDUs into single transaction for sending to remote destinations, so that we only ever have one transaction in flight to a given destination at any one time. This is also responsible for answering requests for things after a given set of transactions, i.e., ask for everything after 'ver' X. 4. Transport Layer This is responsible for starting a HTTP server and hitting the correct callbacks on the Transaction layer, as well as sending both data and requests for data. == Persistence == We persist things in a single sqlite3 database. All database queries get run on a separate, dedicated thread. This that we only ever have one query running at a time, making it a lot easier to do things in a safe manner. The queries are located in the synapse.persistence.transactions module, and the table information in the synapse.persistence.tables module. 
synapse-0.24.0/docs/application_services.rst000066400000000000000000000020651317335640100211550ustar00rootroot00000000000000Registering an Application Service ================================== The registration of new application services depends on the homeserver used. In synapse, you need to create a new configuration file for your AS and add it to the list specified under the ``app_service_config_files`` config option in your synapse config. For example: .. code-block:: yaml app_service_config_files: - /home/matrix/.synapse/.yaml The format of the AS configuration file is as follows: .. code-block:: yaml url: as_token: hs_token: sender_localpart: namespaces: users: # List of users we're interested in - exclusive: regex: - ... aliases: [] # List of aliases we're interested in rooms: [] # List of room ids we're interested in See the spec_ for further details on how application services work. .. _spec: https://matrix.org/docs/spec/application_service/unstable.html synapse-0.24.0/docs/architecture.rst000066400000000000000000000066571317335640100174440ustar00rootroot00000000000000Synapse Architecture ==================== As of the end of Oct 2014, Synapse's overall architecture looks like:: synapse .-----------------------------------------------------. | Notifier | | ^ | | | | | | | .------------|------. | | | handlers/ | | | | | v | | | | Event*Handler <--------> rest/* <=> Client | | Rooms*Handler | | HSes <=> federation/* <==> FederationHandler | | | | | PresenceHandler | | | | | TypingHandler | | | | '-------------------' | | | | | | | | state/* | | | | | | | | | v v | | `--------------> storage/* | | | | '--------------------------|--------------------------' v .----. | DB | '----' * Handlers: business logic of synapse itself. Follows a set contract of BaseHandler: - BaseHandler gives us onNewRoomEvent which: (TODO: flesh this out and make it less cryptic): + handle_state(event) + auth(event) + persist_event(event) + notify notifier or federation(event) - PresenceHandler: use distributor to get EDUs out of Federation. Very lightweight logic built on the distributor - TypingHandler: use distributor to get EDUs out of Federation. Very lightweight logic built on the distributor - EventsHandler: handles the events stream... - FederationHandler: - gets PDU from Federation Layer; turns into an event; follows basehandler functionality. - RoomsHandler: does all the room logic, including members - lots of classes in RoomsHandler. - ProfileHandler: talks to the storage to store/retrieve profile info. * EventFactory: generates events of particular event types. * Notifier: Backs the events handler * REST: Interfaces handlers and events to the outside world via HTTP/JSON. Converts events back and forth from JSON. * Federation: holds the HTTP client & server to talk to other servers. Does replication to make sure there's nothing missing in the graph. Handles reliability. Handles txns. * Distributor: generic event bus. used for presence & typing only currently. Notifier could be implemented using Distributor - so far we are only using for things which actually /require/ dynamic pluggability however as it can obfuscate the actual flow of control. * Auth: helper singleton to say whether a given event is allowed to do a given thing (TODO: put this on the diagram) * State: helper singleton: does state conflict resolution. You give it an event and it tells you if it actually updates the state or not, and annotates the event up properly and handles merge conflict resolution. 
* Storage: abstracts the storage engine. synapse-0.24.0/docs/code_style.rst000066400000000000000000000041321317335640100170760ustar00rootroot00000000000000Basically, PEP8 - NEVER tabs. 4 spaces to indent. - Max line width: 79 chars (with flexibility to overflow by a "few chars" if the overflowing content is not semantically significant and avoids an explosion of vertical whitespace). - Use camel case for class and type names - Use underscores for functions and variables. - Use double quotes. - Use parentheses instead of '\\' for line continuation where ever possible (which is pretty much everywhere) - There should be max a single new line between: - statements - functions in a class - There should be two new lines between: - definitions in a module (e.g., between different classes) - There should be spaces where spaces should be and not where there shouldn't be: - a single space after a comma - a single space before and after for '=' when used as assignment - no spaces before and after for '=' for default values and keyword arguments. - Indenting must follow PEP8; either hanging indent or multiline-visual indent depending on the size and shape of the arguments and what makes more sense to the author. In other words, both this:: print("I am a fish %s" % "moo") and this:: print("I am a fish %s" % "moo") and this:: print( "I am a fish %s" % "moo" ) ...are valid, although given each one takes up 2x more vertical space than the previous, it's up to the author's discretion as to which layout makes most sense for their function invocation. (e.g. if they want to add comments per-argument, or put expressions in the arguments, or group related arguments together, or want to deliberately extend or preserve vertical/horizontal space) Comments should follow the `google code style `_. This is so that we can generate documentation with `sphinx `_. See the `examples `_ in the sphinx documentation. Code should pass pep8 --max-line-length=100 without any warnings. synapse-0.24.0/docs/log_contexts.rst000066400000000000000000000411301317335640100174530ustar00rootroot00000000000000Log contexts ============ .. contents:: To help track the processing of individual requests, synapse uses a 'log context' to track which request it is handling at any given moment. This is done via a thread-local variable; a ``logging.Filter`` is then used to fish the information back out of the thread-local variable and add it to each log record. Logcontexts are also used for CPU and database accounting, so that we can track which requests were responsible for high CPU use or database activity. The ``synapse.util.logcontext`` module provides a facilities for managing the current log context (as well as providing the ``LoggingContextFilter`` class). Deferreds make the whole thing complicated, so this document describes how it all works, and how to write code which follows the rules. Logcontexts without Deferreds ----------------------------- In the absence of any Deferred voodoo, things are simple enough. As with any code of this nature, the rule is that our function should leave things as it found them: .. 
code:: python from synapse.util import logcontext # omitted from future snippets def handle_request(request_id): request_context = logcontext.LoggingContext() calling_context = logcontext.LoggingContext.current_context() logcontext.LoggingContext.set_current_context(request_context) try: request_context.request = request_id do_request_handling() logger.debug("finished") finally: logcontext.LoggingContext.set_current_context(calling_context) def do_request_handling(): logger.debug("phew") # this will be logged against request_id LoggingContext implements the context management methods, so the above can be written much more succinctly as: .. code:: python def handle_request(request_id): with logcontext.LoggingContext() as request_context: request_context.request = request_id do_request_handling() logger.debug("finished") def do_request_handling(): logger.debug("phew") Using logcontexts with Deferreds -------------------------------- Deferreds — and in particular, ``defer.inlineCallbacks`` — break the linear flow of code so that there is no longer a single entry point where we should set the logcontext and a single exit point where we should remove it. Consider the example above, where ``do_request_handling`` needs to do some blocking operation, and returns a deferred: .. code:: python @defer.inlineCallbacks def handle_request(request_id): with logcontext.LoggingContext() as request_context: request_context.request = request_id yield do_request_handling() logger.debug("finished") In the above flow: * The logcontext is set * ``do_request_handling`` is called, and returns a deferred * ``handle_request`` yields the deferred * The ``inlineCallbacks`` wrapper of ``handle_request`` returns a deferred So we have stopped processing the request (and will probably go on to start processing the next), without clearing the logcontext. To circumvent this problem, synapse code assumes that, wherever you have a deferred, you will want to yield on it. To that end, whereever functions return a deferred, we adopt the following conventions: **Rules for functions returning deferreds:** * If the deferred is already complete, the function returns with the same logcontext it started with. * If the deferred is incomplete, the function clears the logcontext before returning; when the deferred completes, it restores the logcontext before running any callbacks. That sounds complicated, but actually it means a lot of code (including the example above) "just works". There are two cases: * If ``do_request_handling`` returns a completed deferred, then the logcontext will still be in place. In this case, execution will continue immediately after the ``yield``; the "finished" line will be logged against the right context, and the ``with`` block restores the original context before we return to the caller. * If the returned deferred is incomplete, ``do_request_handling`` clears the logcontext before returning. The logcontext is therefore clear when ``handle_request`` yields the deferred. At that point, the ``inlineCallbacks`` wrapper adds a callback to the deferred, and returns another (incomplete) deferred to the caller, and it is safe to begin processing the next request. Once ``do_request_handling``'s deferred completes, it will reinstate the logcontext, before running the callback added by the ``inlineCallbacks`` wrapper. That callback runs the second half of ``handle_request``, so again the "finished" line will be logged against the right context, and the ``with`` block restores the original context. 
As an aside, it's worth noting that ``handle_request`` follows our rules - though that only matters if the caller has its own logcontext which it cares about. The following sections describe pitfalls and helpful patterns when implementing these rules. Always yield your deferreds --------------------------- Whenever you get a deferred back from a function, you should ``yield`` on it as soon as possible. (Returning it directly to your caller is ok too, if you're not doing ``inlineCallbacks``.) Do not pass go; do not do any logging; do not call any other functions. .. code:: python @defer.inlineCallbacks def fun(): logger.debug("starting") yield do_some_stuff() # just like this d = more_stuff() result = yield d # also fine, of course defer.returnValue(result) def nonInlineCallbacksFun(): logger.debug("just a wrapper really") return do_some_stuff() # this is ok too - the caller will yield on # it anyway. Provided this pattern is followed all the way back up to the callchain to where the logcontext was set, this will make things work out ok: provided ``do_some_stuff`` and ``more_stuff`` follow the rules above, then so will ``fun`` (as wrapped by ``inlineCallbacks``) and ``nonInlineCallbacksFun``. It's all too easy to forget to ``yield``: for instance if we forgot that ``do_some_stuff`` returned a deferred, we might plough on regardless. This leads to a mess; it will probably work itself out eventually, but not before a load of stuff has been logged against the wrong content. (Normally, other things will break, more obviously, if you forget to ``yield``, so this tends not to be a major problem in practice.) Of course sometimes you need to do something a bit fancier with your Deferreds - not all code follows the linear A-then-B-then-C pattern. Notes on implementing more complex patterns are in later sections. Where you create a new Deferred, make it follow the rules --------------------------------------------------------- Most of the time, a Deferred comes from another synapse function. Sometimes, though, we need to make up a new Deferred, or we get a Deferred back from external code. We need to make it follow our rules. The easy way to do it is with a combination of ``defer.inlineCallbacks``, and ``logcontext.PreserveLoggingContext``. Suppose we want to implement ``sleep``, which returns a deferred which will run its callbacks after a given number of seconds. That might look like: .. code:: python # not a logcontext-rules-compliant function def get_sleep_deferred(seconds): d = defer.Deferred() reactor.callLater(seconds, d.callback, None) return d That doesn't follow the rules, but we can fix it by wrapping it with ``PreserveLoggingContext`` and ``yield`` ing on it: .. code:: python @defer.inlineCallbacks def sleep(seconds): with PreserveLoggingContext(): yield get_sleep_deferred(seconds) This technique works equally for external functions which return deferreds, or deferreds we have made ourselves. You can also use ``logcontext.make_deferred_yieldable``, which just does the boilerplate for you, so the above could be written: .. code:: python def sleep(seconds): return logcontext.make_deferred_yieldable(get_sleep_deferred(seconds)) Fire-and-forget --------------- Sometimes you want to fire off a chain of execution, but not wait for its result. That might look a bit like this: .. 
code:: python @defer.inlineCallbacks def do_request_handling(): yield foreground_operation() # *don't* do this background_operation() logger.debug("Request handling complete") @defer.inlineCallbacks def background_operation(): yield first_background_step() logger.debug("Completed first step") yield second_background_step() logger.debug("Completed second step") The above code does a couple of steps in the background after ``do_request_handling`` has finished. The log lines are still logged against the ``request_context`` logcontext, which may or may not be desirable. There are two big problems with the above, however. The first problem is that, if ``background_operation`` returns an incomplete Deferred, it will expect its caller to ``yield`` immediately, so will have cleared the logcontext. In this example, that means that 'Request handling complete' will be logged without any context. The second problem, which is potentially even worse, is that when the Deferred returned by ``background_operation`` completes, it will restore the original logcontext. There is nothing waiting on that Deferred, so the logcontext will leak into the reactor and possibly get attached to some arbitrary future operation. There are two potential solutions to this. One option is to surround the call to ``background_operation`` with a ``PreserveLoggingContext`` call. That will reset the logcontext before starting ``background_operation`` (so the context restored when the deferred completes will be the empty logcontext), and will restore the current logcontext before continuing the foreground process: .. code:: python @defer.inlineCallbacks def do_request_handling(): yield foreground_operation() # start background_operation off in the empty logcontext, to # avoid leaking the current context into the reactor. with PreserveLoggingContext(): background_operation() # this will now be logged against the request context logger.debug("Request handling complete") Obviously that option means that the operations done in ``background_operation`` would be not be logged against a logcontext (though that might be fixed by setting a different logcontext via a ``with LoggingContext(...)`` in ``background_operation``). The second option is to use ``logcontext.preserve_fn``, which wraps a function so that it doesn't reset the logcontext even when it returns an incomplete deferred, and adds a callback to the returned deferred to reset the logcontext. In other words, it turns a function that follows the Synapse rules about logcontexts and Deferreds into one which behaves more like an external function — the opposite operation to that described in the previous section. It can be used like this: .. code:: python @defer.inlineCallbacks def do_request_handling(): yield foreground_operation() logcontext.preserve_fn(background_operation)() # this will now be logged against the request context logger.debug("Request handling complete") XXX: I think ``preserve_context_over_fn`` is supposed to do the first option, but the fact that it does ``preserve_context_over_deferred`` on its results means that its use is fraught with difficulty. Passing synapse deferreds into third-party functions ---------------------------------------------------- A typical example of this is where we want to collect together two or more deferred via ``defer.gatherResults``: .. 
code:: python d1 = operation1() d2 = operation2() d3 = defer.gatherResults([d1, d2]) This is really a variation of the fire-and-forget problem above, in that we are firing off ``d1`` and ``d2`` without yielding on them. The difference is that we now have third-party code attached to their callbacks. Anyway either technique given in the `Fire-and-forget`_ section will work. Of course, the new Deferred returned by ``gatherResults`` needs to be wrapped in order to make it follow the logcontext rules before we can yield it, as described in `Where you create a new Deferred, make it follow the rules`_. So, option one: reset the logcontext before starting the operations to be gathered: .. code:: python @defer.inlineCallbacks def do_request_handling(): with PreserveLoggingContext(): d1 = operation1() d2 = operation2() result = yield defer.gatherResults([d1, d2]) In this case particularly, though, option two, of using ``logcontext.preserve_fn`` almost certainly makes more sense, so that ``operation1`` and ``operation2`` are both logged against the original logcontext. This looks like: .. code:: python @defer.inlineCallbacks def do_request_handling(): d1 = logcontext.preserve_fn(operation1)() d2 = logcontext.preserve_fn(operation2)() with PreserveLoggingContext(): result = yield defer.gatherResults([d1, d2]) Was all this really necessary? ------------------------------ The conventions used work fine for a linear flow where everything happens in series via ``defer.inlineCallbacks`` and ``yield``, but are certainly tricky to follow for any more exotic flows. It's hard not to wonder if we could have done something else. We're not going to rewrite Synapse now, so the following is entirely of academic interest, but I'd like to record some thoughts on an alternative approach. I briefly prototyped some code following an alternative set of rules. I think it would work, but I certainly didn't get as far as thinking how it would interact with concepts as complicated as the cache descriptors. My alternative rules were: * functions always preserve the logcontext of their caller, whether or not they are returning a Deferred. * Deferreds returned by synapse functions run their callbacks in the same context as the function was orignally called in. The main point of this scheme is that everywhere that sets the logcontext is responsible for clearing it before returning control to the reactor. So, for example, if you were the function which started a ``with LoggingContext`` block, you wouldn't ``yield`` within it — instead you'd start off the background process, and then leave the ``with`` block to wait for it: .. code:: python def handle_request(request_id): with logcontext.LoggingContext() as request_context: request_context.request = request_id d = do_request_handling() def cb(r): logger.debug("finished") d.addCallback(cb) return d (in general, mixing ``with LoggingContext`` blocks and ``defer.inlineCallbacks`` in the same function leads to slighly counter-intuitive code, under this scheme). Because we leave the original ``with`` block as soon as the Deferred is returned (as opposed to waiting for it to be resolved, as we do today), the logcontext is cleared before control passes back to the reactor; so if there is some code within ``do_request_handling`` which needs to wait for a Deferred to complete, there is no need for it to worry about clearing the logcontext before doing so: .. 
code:: python def handle_request(): r = do_some_stuff() r.addCallback(do_some_more_stuff) return r — and provided ``do_some_stuff`` follows the rules of returning a Deferred which runs its callbacks in the original logcontext, all is happy. The business of a Deferred which runs its callbacks in the original logcontext isn't hard to achieve — we have it today, in the shape of ``logcontext._PreservingContextDeferred``: .. code:: python def do_some_stuff(): deferred = do_some_io() pcd = _PreservingContextDeferred(LoggingContext.current_context()) deferred.chainDeferred(pcd) return pcd It turns out that, thanks to the way that Deferreds chain together, we automatically get the property of a context-preserving deferred with ``defer.inlineCallbacks``, provided the final Defered the function ``yields`` on has that property. So we can just write: .. code:: python @defer.inlineCallbacks def handle_request(): yield do_some_stuff() yield do_some_more_stuff() To conclude: I think this scheme would have worked equally well, with less danger of messing it up, and probably made some more esoteric code easier to write. But again — changing the conventions of the entire Synapse codebase is not a sensible option for the marginal improvement offered. synapse-0.24.0/docs/media_repository.rst000066400000000000000000000025171317335640100203270ustar00rootroot00000000000000Media Repository ================ *Synapse implementation-specific details for the media repository* The media repository is where attachments and avatar photos are stored. It stores attachment content and thumbnails for media uploaded by local users. It caches attachment content and thumbnails for media uploaded by remote users. Storage ------- Each item of media is assigned a ``media_id`` when it is uploaded. The ``media_id`` is a randomly chosen, URL safe 24 character string. Metadata such as the MIME type, upload time and length are stored in the sqlite3 database indexed by ``media_id``. Content is stored on the filesystem under a ``"local_content"`` directory. Thumbnails are stored under a ``"local_thumbnails"`` directory. The item with ``media_id`` ``"aabbccccccccdddddddddddd"`` is stored under ``"local_content/aa/bb/ccccccccdddddddddddd"``. Its thumbnail with width ``128`` and height ``96`` and type ``"image/jpeg"`` is stored under ``"local_thumbnails/aa/bb/ccccccccdddddddddddd/128-96-image-jpeg"`` Remote content is cached under ``"remote_content"`` directory. Each item of remote content is assigned a local "``filesystem_id``" to ensure that the directory structure ``"remote_content/server_name/aa/bb/ccccccccdddddddddddd"`` is appropriate. Thumbnails for remote content are stored under ``"remote_thumbnails/server_name/..."`` synapse-0.24.0/docs/metrics-howto.rst000066400000000000000000000046361317335640100175610ustar00rootroot00000000000000How to monitor Synapse metrics using Prometheus =============================================== 1. Install prometheus: Follow instructions at http://prometheus.io/docs/introduction/install/ 2. Enable synapse metrics: Simply setting a (local) port number will enable it. Pick a port. prometheus itself defaults to 9090, so starting just above that for locally monitored services seems reasonable. E.g. 9092: Add to homeserver.yaml:: metrics_port: 9092 Also ensure that ``enable_metrics`` is set to ``True``. Restart synapse. 3. Add a prometheus target for synapse. 
It needs to set the ``metrics_path`` to a non-default value (under ``scrape_configs``):: - job_name: "synapse" metrics_path: "/_synapse/metrics" static_configs: - targets: ["my.server.here:9092"] If your prometheus is older than 1.5.2, you will need to replace ``static_configs`` in the above with ``target_groups``. Restart prometheus. Standard Metric Names --------------------- As of synapse version 0.18.2, the format of the process-wide metrics has been changed to fit prometheus standard naming conventions. Additionally the units have been changed to seconds, from miliseconds. ================================== ============================= New name Old name ---------------------------------- ----------------------------- process_cpu_user_seconds_total process_resource_utime / 1000 process_cpu_system_seconds_total process_resource_stime / 1000 process_open_fds (no 'type' label) process_fds ================================== ============================= The python-specific counts of garbage collector performance have been renamed. =========================== ====================== New name Old name --------------------------- ---------------------- python_gc_time reactor_gc_time python_gc_unreachable_total reactor_gc_unreachable python_gc_counts reactor_gc_counts =========================== ====================== The twisted-specific reactor metrics have been renamed. ==================================== ===================== New name Old name ------------------------------------ --------------------- python_twisted_reactor_pending_calls reactor_pending_calls python_twisted_reactor_tick_time reactor_tick_time ==================================== ===================== synapse-0.24.0/docs/postgres.rst000066400000000000000000000075301317335640100166170ustar00rootroot00000000000000Using Postgres -------------- Postgres version 9.4 or later is known to work. Set up database =============== The PostgreSQL database used *must* have the correct encoding set, otherwise would not be able to store UTF8 strings. To create a database with the correct encoding use, e.g.:: CREATE DATABASE synapse ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' template=template0 OWNER synapse_user; This would create an appropriate database named ``synapse`` owned by the ``synapse_user`` user (which must already exist). Set up client in Debian/Ubuntu =========================== Postgres support depends on the postgres python connector ``psycopg2``. In the virtual env:: sudo apt-get install libpq-dev pip install psycopg2 Set up client in RHEL/CentOs 7 ============================== Make sure you have the appropriate version of postgres-devel installed. For a postgres 9.4, use the postgres 9.4 packages from [here](https://wiki.postgresql.org/wiki/YUM_Installation). As with Debian/Ubuntu, postgres support depends on the postgres python connector ``psycopg2``. In the virtual env:: sudo yum install postgresql-devel libpqxx-devel.x86_64 export PATH=/usr/pgsql-9.4/bin/:$PATH pip install psycopg2 Synapse config ============== When you are ready to start using PostgreSQL, add the following line to your config file:: database: name: psycopg2 args: user: password: database: host: cp_min: 5 cp_max: 10 All key, values in ``args`` are passed to the ``psycopg2.connect(..)`` function, except keys beginning with ``cp_``, which are consumed by the twisted adbapi connection pool. 
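As a rough illustration of what this means in practice (a minimal sketch, not
Synapse's actual database code), the whole ``args`` mapping can be handed to
twisted's connection pool, which keeps the ``cp_*`` keys for itself and passes
the rest to ``psycopg2.connect(..)`` each time it opens a connection:

.. code:: python

    from twisted.enterprise import adbapi

    # Hypothetical ``args`` section from the config file, as a Python dict.
    db_args = {
        "user": "synapse_user",
        "password": "secretpassword",
        "database": "synapse",
        "host": "localhost",
        "cp_min": 5,
        "cp_max": 10,
    }

    # adbapi.ConnectionPool consumes the cp_* keys (pool sizing) itself and
    # forwards the remaining keyword arguments to psycopg2.connect().
    pool = adbapi.ConnectionPool("psycopg2", **db_args)
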
Porting from SQLite =================== Overview ~~~~~~~~ The script ``synapse_port_db`` allows porting an existing synapse server backed by SQLite to using PostgreSQL. This is done in as a two phase process: 1. Copy the existing SQLite database to a separate location (while the server is down) and running the port script against that offline database. 2. Shut down the server. Rerun the port script to port any data that has come in since taking the first snapshot. Restart server against the PostgreSQL database. The port script is designed to be run repeatedly against newer snapshots of the SQLite database file. This makes it safe to repeat step 1 if there was a delay between taking the previous snapshot and being ready to do step 2. It is safe to at any time kill the port script and restart it. Using the port script ~~~~~~~~~~~~~~~~~~~~~ Firstly, shut down the currently running synapse server and copy its database file (typically ``homeserver.db``) to another location. Once the copy is complete, restart synapse. For instance:: ./synctl stop cp homeserver.db homeserver.db.snapshot ./synctl start Assuming your new config file (as described in the section *Synapse config*) is named ``homeserver-postgres.yaml`` and the SQLite snapshot is at ``homeserver.db.snapshot`` then simply run:: synapse_port_db --sqlite-database homeserver.db.snapshot \ --postgres-config homeserver-postgres.yaml The flag ``--curses`` displays a coloured curses progress UI. If the script took a long time to complete, or time has otherwise passed since the original snapshot was taken, repeat the previous steps with a newer snapshot. To complete the conversion shut down the synapse server and run the port script one last time, e.g. if the SQLite database is at ``homeserver.db`` run:: synapse_port_db --sqlite-database homeserver.db \ --postgres-config homeserver-postgres.yaml Once that has completed, change the synapse config to point at the PostgreSQL database configuration file ``homeserver-postgres.yaml`` (i.e. rename it to ``homeserver.yaml``) and restart synapse. Synapse should now be running against PostgreSQL. synapse-0.24.0/docs/replication.rst000066400000000000000000000030741317335640100172610ustar00rootroot00000000000000Replication Architecture ======================== Motivation ---------- We'd like to be able to split some of the work that synapse does into multiple python processes. In theory multiple synapse processes could share a single postgresql database and we'd scale up by running more synapse processes. However much of synapse assumes that only one process is interacting with the database, both for assigning unique identifiers when inserting into tables, notifying components about new updates, and for invalidating its caches. So running multiple copies of the current code isn't an option. One way to run multiple processes would be to have a single writer process and multiple reader processes connected to the same database. In order to do this we'd need a way for the reader process to invalidate its in-memory caches when an update happens on the writer. One way to do this is for the writer to present an append-only log of updates which the readers can consume to invalidate their caches and to push updates to listening clients or pushers. Synapse already stores much of its data as an append-only log so that it can correctly respond to /sync requests so the amount of code changes needed to expose the append-only log to the readers should be fairly minimal. 
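To make that concrete, a toy model of a reader consuming such an append-only
log might look something like the following (purely illustrative; the names
are invented and this is not Synapse's actual replication code):

.. code:: python

    class ToyReader(object):
        """Consumes (stream_name, token, row) updates published by the writer."""

        def __init__(self):
            self.current_tokens = {}  # stream_name -> last token consumed
            self.cache = {}           # cache_name -> cached values

        def process_update(self, stream_name, token, row):
            # Remember how far through each stream we have got, so we can
            # resume from this point if we lose the connection to the writer.
            self.current_tokens[stream_name] = token

            # Use the update to invalidate anything it has made stale.
            if stream_name == "caches":
                # A "caches" row names a cache whose entries are now stale.
                self.cache.pop(row[0], None)
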
Architecture ------------ The Replication Protocol ~~~~~~~~~~~~~~~~~~~~~~~~ See ``tcp_replication.rst`` The Slaved DataStore ~~~~~~~~~~~~~~~~~~~~ There are read-only version of the synapse storage layer in ``synapse/replication/slave/storage`` that use the response of the replication API to invalidate their caches. synapse-0.24.0/docs/sphinx/000077500000000000000000000000001317335640100155235ustar00rootroot00000000000000synapse-0.24.0/docs/sphinx/README.rst000066400000000000000000000000631317335640100172110ustar00rootroot00000000000000TODO: how (if at all) is this actually maintained? synapse-0.24.0/docs/sphinx/conf.py000066400000000000000000000205111317335640100170210ustar00rootroot00000000000000# -*- coding: utf-8 -*- # # Synapse documentation build configuration file, created by # sphinx-quickstart on Tue Jun 10 17:31:02 2014. # # This file is execfile()d with the current directory set to its # containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. import sys import os # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. sys.path.insert(0, os.path.abspath('..')) # -- General configuration ------------------------------------------------ # If your documentation needs a minimal Sphinx version, state it here. #needs_sphinx = '1.0' # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom # ones. extensions = [ 'sphinx.ext.autodoc', 'sphinx.ext.intersphinx', 'sphinx.ext.coverage', 'sphinx.ext.ifconfig', 'sphinxcontrib.napoleon', ] # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. #source_encoding = 'utf-8-sig' # The master toctree document. master_doc = 'index' # General information about the project. project = u'Synapse' copyright = u'Copyright 2014-2017 OpenMarket Ltd, 2017 Vector Creations Ltd, 2017 New Vector Ltd' # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. # # The short X.Y version. version = '1.0' # The full version, including alpha/beta/rc tags. release = '1.0' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. #language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: #today = '' # Else, today_fmt is used as the format for a strftime call. #today_fmt = '%B %d, %Y' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. exclude_patterns = ['_build'] # The reST default role (used for this markup: `text`) to use for all # documents. #default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. #add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). #add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. 
#show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. #modindex_common_prefix = [] # If true, keep warnings as "system message" paragraphs in the built documents. #keep_warnings = False # -- Options for HTML output ---------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. html_theme = 'default' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. #html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. #html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". #html_title = None # A shorter title for the navigation bar. Default is the same as html_title. #html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. #html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. #html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ['_static'] # Add any extra paths that contain custom files (such as robots.txt or # .htaccess) here, relative to this directory. These files are copied # directly to the root of the documentation. #html_extra_path = [] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. #html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. #html_use_smartypants = True # Custom sidebar templates, maps document names to template names. #html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. #html_additional_pages = {} # If false, no module index is generated. #html_domain_indices = True # If false, no index is generated. #html_use_index = True # If true, the index is split into individual pages for each letter. #html_split_index = False # If true, links to the reST sources are added to the pages. #html_show_sourcelink = True # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. #html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. #html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. #html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). #html_file_suffix = None # Output file base name for HTML help builder. htmlhelp_basename = 'Synapsedoc' # -- Options for LaTeX output --------------------------------------------- latex_elements = { # The paper size ('letterpaper' or 'a4paper'). #'papersize': 'letterpaper', # The font size ('10pt', '11pt' or '12pt'). #'pointsize': '10pt', # Additional stuff for the LaTeX preamble. #'preamble': '', } # Grouping the document tree into LaTeX files. 
List of tuples # (source start file, target name, title, # author, documentclass [howto, manual, or own class]). latex_documents = [ ('index', 'Synapse.tex', u'Synapse Documentation', u'TNG', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. #latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. #latex_use_parts = False # If true, show page references after internal links. #latex_show_pagerefs = False # If true, show URL addresses after external links. #latex_show_urls = False # Documents to append as an appendix to all manuals. #latex_appendices = [] # If false, no module index is generated. #latex_domain_indices = True # -- Options for manual page output --------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [ ('index', 'synapse', u'Synapse Documentation', [u'TNG'], 1) ] # If true, show URL addresses after external links. #man_show_urls = False # -- Options for Texinfo output ------------------------------------------- # Grouping the document tree into Texinfo files. List of tuples # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ ('index', 'Synapse', u'Synapse Documentation', u'TNG', 'Synapse', 'One line description of project.', 'Miscellaneous'), ] # Documents to append as an appendix to all manuals. #texinfo_appendices = [] # If false, no module index is generated. #texinfo_domain_indices = True # How to display URL addresses: 'footnote', 'no', or 'inline'. #texinfo_show_urls = 'footnote' # If true, do not generate a @detailmenu in the "Top" node's menu. #texinfo_no_detailmenu = False # Example configuration for intersphinx: refer to the Python standard library. intersphinx_mapping = {'http://docs.python.org/': None} napoleon_include_special_with_doc = True napoleon_use_ivar = True synapse-0.24.0/docs/sphinx/index.rst000066400000000000000000000006431317335640100173670ustar00rootroot00000000000000.. Synapse documentation master file, created by sphinx-quickstart on Tue Jun 10 17:31:02 2014. You can adapt this file completely to your liking, but it should at least contain the root `toctree` directive. Welcome to Synapse's documentation! =================================== Contents: .. toctree:: synapse Indices and tables ================== * :ref:`genindex` * :ref:`modindex` * :ref:`search` synapse-0.24.0/docs/sphinx/modules.rst000066400000000000000000000000721317335640100177240ustar00rootroot00000000000000synapse ======= .. toctree:: :maxdepth: 4 synapse synapse-0.24.0/docs/sphinx/synapse.api.auth.rst000066400000000000000000000002131317335640100214430ustar00rootroot00000000000000synapse.api.auth module ======================= .. automodule:: synapse.api.auth :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.api.constants.rst000066400000000000000000000002321317335640100225170ustar00rootroot00000000000000synapse.api.constants module ============================ .. automodule:: synapse.api.constants :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.api.dbobjects.rst000066400000000000000000000002321317335640100224420ustar00rootroot00000000000000synapse.api.dbobjects module ============================ .. 
automodule:: synapse.api.dbobjects :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.api.errors.rst000066400000000000000000000002211317335640100220150ustar00rootroot00000000000000synapse.api.errors module ========================= .. automodule:: synapse.api.errors :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.api.event_stream.rst000066400000000000000000000002431317335640100232010ustar00rootroot00000000000000synapse.api.event_stream module =============================== .. automodule:: synapse.api.event_stream :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.api.events.factory.rst000066400000000000000000000002511317335640100234560ustar00rootroot00000000000000synapse.api.events.factory module ================================= .. automodule:: synapse.api.events.factory :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.api.events.room.rst000066400000000000000000000002401317335640100227610ustar00rootroot00000000000000synapse.api.events.room module ============================== .. automodule:: synapse.api.events.room :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.api.events.rst000066400000000000000000000004231317335640100220110ustar00rootroot00000000000000synapse.api.events package ========================== Submodules ---------- .. toctree:: synapse.api.events.factory synapse.api.events.room Module contents --------------- .. automodule:: synapse.api.events :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.api.handlers.events.rst000066400000000000000000000002541317335640100236120ustar00rootroot00000000000000synapse.api.handlers.events module ================================== .. automodule:: synapse.api.handlers.events :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.api.handlers.factory.rst000066400000000000000000000002571317335640100237600ustar00rootroot00000000000000synapse.api.handlers.factory module =================================== .. automodule:: synapse.api.handlers.factory :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.api.handlers.federation.rst000066400000000000000000000002701317335640100244240ustar00rootroot00000000000000synapse.api.handlers.federation module ====================================== .. automodule:: synapse.api.handlers.federation :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.api.handlers.register.rst000066400000000000000000000002621317335640100241310ustar00rootroot00000000000000synapse.api.handlers.register module ==================================== .. automodule:: synapse.api.handlers.register :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.api.handlers.room.rst000066400000000000000000000002461317335640100232630ustar00rootroot00000000000000synapse.api.handlers.room module ================================ .. automodule:: synapse.api.handlers.room :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.api.handlers.rst000066400000000000000000000006001317335640100223020ustar00rootroot00000000000000synapse.api.handlers package ============================ Submodules ---------- .. toctree:: synapse.api.handlers.events synapse.api.handlers.factory synapse.api.handlers.federation synapse.api.handlers.register synapse.api.handlers.room Module contents --------------- .. 
automodule:: synapse.api.handlers :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.api.notifier.rst000066400000000000000000000002271317335640100223260ustar00rootroot00000000000000synapse.api.notifier module =========================== .. automodule:: synapse.api.notifier :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.api.register_events.rst000066400000000000000000000002541317335640100237170ustar00rootroot00000000000000synapse.api.register_events module ================================== .. automodule:: synapse.api.register_events :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.api.room_events.rst000066400000000000000000000002401317335640100230420ustar00rootroot00000000000000synapse.api.room_events module ============================== .. automodule:: synapse.api.room_events :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.api.rst000066400000000000000000000006471317335640100205160ustar00rootroot00000000000000synapse.api package =================== Subpackages ----------- .. toctree:: synapse.api.events synapse.api.handlers synapse.api.streams Submodules ---------- .. toctree:: synapse.api.auth synapse.api.constants synapse.api.errors synapse.api.notifier synapse.api.storage Module contents --------------- .. automodule:: synapse.api :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.api.server.rst000066400000000000000000000002211317335640100220070ustar00rootroot00000000000000synapse.api.server module ========================= .. automodule:: synapse.api.server :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.api.storage.rst000066400000000000000000000002241317335640100221500ustar00rootroot00000000000000synapse.api.storage module ========================== .. automodule:: synapse.api.storage :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.api.stream.rst000066400000000000000000000002211317335640100217740ustar00rootroot00000000000000synapse.api.stream module ========================= .. automodule:: synapse.api.stream :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.api.streams.event.rst000066400000000000000000000002461317335640100233060ustar00rootroot00000000000000synapse.api.streams.event module ================================ .. automodule:: synapse.api.streams.event :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.api.streams.rst000066400000000000000000000003721317335640100221660ustar00rootroot00000000000000synapse.api.streams package =========================== Submodules ---------- .. toctree:: synapse.api.streams.event Module contents --------------- .. automodule:: synapse.api.streams :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.app.homeserver.rst000066400000000000000000000002351317335640100226740ustar00rootroot00000000000000synapse.app.homeserver module ============================= .. automodule:: synapse.app.homeserver :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.app.rst000066400000000000000000000003371317335640100205210ustar00rootroot00000000000000synapse.app package =================== Submodules ---------- .. toctree:: synapse.app.homeserver Module contents --------------- .. 
automodule:: synapse.app :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.db.rst000066400000000000000000000002341317335640100203220ustar00rootroot00000000000000synapse.db package ================== Module contents --------------- .. automodule:: synapse.db :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.federation.handler.rst000066400000000000000000000002511317335640100234700ustar00rootroot00000000000000synapse.federation.handler module ================================= .. automodule:: synapse.federation.handler :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.federation.messaging.rst000066400000000000000000000002571317335640100240360ustar00rootroot00000000000000synapse.federation.messaging module =================================== .. automodule:: synapse.federation.messaging :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.federation.pdu_codec.rst000066400000000000000000000002571317335640100240060ustar00rootroot00000000000000synapse.federation.pdu_codec module =================================== .. automodule:: synapse.federation.pdu_codec :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.federation.persistence.rst000066400000000000000000000002651317335640100244040ustar00rootroot00000000000000synapse.federation.persistence module ===================================== .. automodule:: synapse.federation.persistence :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.federation.replication.rst000066400000000000000000000002651317335640100243710ustar00rootroot00000000000000synapse.federation.replication module ===================================== .. automodule:: synapse.federation.replication :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.federation.rst000066400000000000000000000006301317335640100220550ustar00rootroot00000000000000synapse.federation package ========================== Submodules ---------- .. toctree:: synapse.federation.handler synapse.federation.pdu_codec synapse.federation.persistence synapse.federation.replication synapse.federation.transport synapse.federation.units Module contents --------------- .. automodule:: synapse.federation :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.federation.transport.rst000066400000000000000000000002571317335640100241150ustar00rootroot00000000000000synapse.federation.transport module =================================== .. automodule:: synapse.federation.transport :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.federation.units.rst000066400000000000000000000002431317335640100232160ustar00rootroot00000000000000synapse.federation.units module =============================== .. automodule:: synapse.federation.units :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.persistence.rst000066400000000000000000000004761317335640100222710ustar00rootroot00000000000000synapse.persistence package =========================== Submodules ---------- .. toctree:: synapse.persistence.service synapse.persistence.tables synapse.persistence.transactions Module contents --------------- .. 
automodule:: synapse.persistence :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.persistence.service.rst000066400000000000000000000002541317335640100237220ustar00rootroot00000000000000synapse.persistence.service module ================================== .. automodule:: synapse.persistence.service :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.persistence.tables.rst000066400000000000000000000002511317335640100235310ustar00rootroot00000000000000synapse.persistence.tables module ================================= .. automodule:: synapse.persistence.tables :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.persistence.transactions.rst000066400000000000000000000002731317335640100247730ustar00rootroot00000000000000synapse.persistence.transactions module ======================================= .. automodule:: synapse.persistence.transactions :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.rest.base.rst000066400000000000000000000002161317335640100216230ustar00rootroot00000000000000synapse.rest.base module ======================== .. automodule:: synapse.rest.base :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.rest.events.rst000066400000000000000000000002241317335640100222140ustar00rootroot00000000000000synapse.rest.events module ========================== .. automodule:: synapse.rest.events :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.rest.register.rst000066400000000000000000000002321317335640100225330ustar00rootroot00000000000000synapse.rest.register module ============================ .. automodule:: synapse.rest.register :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.rest.room.rst000066400000000000000000000002161317335640100216650ustar00rootroot00000000000000synapse.rest.room module ======================== .. automodule:: synapse.rest.room :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.rest.rst000066400000000000000000000004421317335640100207130ustar00rootroot00000000000000synapse.rest package ==================== Submodules ---------- .. toctree:: synapse.rest.base synapse.rest.events synapse.rest.register synapse.rest.room Module contents --------------- .. automodule:: synapse.rest :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.rst000066400000000000000000000005651317335640100177450ustar00rootroot00000000000000synapse package =============== Subpackages ----------- .. toctree:: synapse.api synapse.app synapse.federation synapse.persistence synapse.rest synapse.util Submodules ---------- .. toctree:: synapse.server synapse.state Module contents --------------- .. automodule:: synapse :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.server.rst000066400000000000000000000002051317335640100212410ustar00rootroot00000000000000synapse.server module ===================== .. automodule:: synapse.server :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.state.rst000066400000000000000000000002021317335640100210500ustar00rootroot00000000000000synapse.state module ==================== .. automodule:: synapse.state :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.util.async.rst000066400000000000000000000002211317335640100220220ustar00rootroot00000000000000synapse.util.async module ========================= .. 
automodule:: synapse.util.async :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.util.dbutils.rst000066400000000000000000000002271317335640100223610ustar00rootroot00000000000000synapse.util.dbutils module =========================== .. automodule:: synapse.util.dbutils :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.util.http.rst000066400000000000000000000002161317335640100216700ustar00rootroot00000000000000synapse.util.http module ======================== .. automodule:: synapse.util.http :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.util.lockutils.rst000066400000000000000000000002351317335640100227230ustar00rootroot00000000000000synapse.util.lockutils module ============================= .. automodule:: synapse.util.lockutils :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.util.logutils.rst000066400000000000000000000002321317335640100225510ustar00rootroot00000000000000synapse.util.logutils module ============================ .. automodule:: synapse.util.logutils :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.util.rst000066400000000000000000000005021317335640100207100ustar00rootroot00000000000000synapse.util package ==================== Submodules ---------- .. toctree:: synapse.util.async synapse.util.http synapse.util.lockutils synapse.util.logutils synapse.util.stringutils Module contents --------------- .. automodule:: synapse.util :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/sphinx/synapse.util.stringutils.rst000066400000000000000000000002431317335640100233000ustar00rootroot00000000000000synapse.util.stringutils module =============================== .. automodule:: synapse.util.stringutils :members: :undoc-members: :show-inheritance: synapse-0.24.0/docs/tcp_replication.rst000066400000000000000000000167071317335640100201360ustar00rootroot00000000000000TCP Replication =============== Motivation ---------- Previously the workers used an HTTP long poll mechanism to get updates from the master, which had the problem of causing a lot of duplicate work on the server. This TCP protocol replaces those APIs with the aim of increased efficiency. Overview -------- The protocol is based on fire and forget, line based commands. An example flow would be (where '>' indicates master to worker and '<' worker to master flows):: > SERVER example.com < REPLICATE events 53 > RDATA events 54 ["$foo1:bar.com", ...] > RDATA events 55 ["$foo4:bar.com", ...] The example shows the server accepting a new connection and sending its identity with the ``SERVER`` command, followed by the client asking to subscribe to the ``events`` stream from the token ``53``. The server then periodically sends ``RDATA`` commands which have the format ``RDATA ``, where the format of ```` is defined by the individual streams. Error reporting happens by either the client or server sending an `ERROR` command, and usually the connection will be closed. Since the protocol is a simple line based, its possible to manually connect to the server using a tool like netcat. A few things should be noted when manually using the protocol: * When subscribing to a stream using ``REPLICATE``, the special token ``NOW`` can be used to get all future updates. The special stream name ``ALL`` can be used with ``NOW`` to subscribe to all available streams. * The federation stream is only available if federation sending has been disabled on the main process. 
* The server will only time connections out that have sent a ``PING`` command. If a ping is sent then the connection will be closed if no further commands are receieved within 15s. Both the client and server protocol implementations will send an initial PING on connection and ensure at least one command every 5s is sent (not necessarily ``PING``). * ``RDATA`` commands *usually* include a numeric token, however if the stream has multiple rows to replicate per token the server will send multiple ``RDATA`` commands, with all but the last having a token of ``batch``. See the documentation on ``commands.RdataCommand`` for further details. Architecture ------------ The basic structure of the protocol is line based, where the initial word of each line specifies the command. The rest of the line is parsed based on the command. For example, the `RDATA` command is defined as:: RDATA (Note that `` may contains spaces, but cannot contain newlines.) Blank lines are ignored. Keep alives ~~~~~~~~~~~ Both sides are expected to send at least one command every 5s or so, and should send a ``PING`` command if necessary. If either side do not receive a command within e.g. 15s then the connection should be closed. Because the server may be connected to manually using e.g. netcat, the timeouts aren't enabled until an initial ``PING`` command is seen. Both the client and server implementations below send a ``PING`` command immediately on connection to ensure the timeouts are enabled. This ensures that both sides can quickly realize if the tcp connection has gone and handle the situation appropriately. Start up ~~~~~~~~ When a new connection is made, the server: * Sends a ``SERVER`` command, which includes the identity of the server, allowing the client to detect if its connected to the expected server * Sends a ``PING`` command as above, to enable the client to time out connections promptly. The client: * Sends a ``NAME`` command, allowing the server to associate a human friendly name with the connection. This is optional. * Sends a ``PING`` as above * For each stream the client wishes to subscribe to it sends a ``REPLICATE`` with the stream_name and token it wants to subscribe from. * On receipt of a ``SERVER`` command, checks that the server name matches the expected server name. Error handling ~~~~~~~~~~~~~~ If either side detects an error it can send an ``ERROR`` command and close the connection. If the client side loses the connection to the server it should reconnect, following the steps above. Congestion ~~~~~~~~~~ If the server sends messages faster than the client can consume them the server will first buffer a (fairly large) number of commands and then disconnect the client. This ensures that we don't queue up an unbounded number of commands in memory and gives us a potential oppurtunity to squawk loudly. When/if the client recovers it can reconnect to the server and ask for missed messages. Reliability ~~~~~~~~~~~ In general the replication stream should be considered an unreliable transport since e.g. commands are not resent if the connection disappears. The exception to that are the replication streams, i.e. RDATA commands, since these include tokens which can be used to restart the stream on connection errors. The client should keep track of the token in the last RDATA command received for each stream so that on reconneciton it can start streaming from the correct place. Note: not all RDATA have valid tokens due to batching. See ``RdataCommand`` for more details. 
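For instance, a minimal sketch of the client-side bookkeeping described above
(illustrative only; this is not the actual ``synapse.replication.tcp`` client)
might look like:

.. code:: python

    class StreamPositionTracker(object):
        """Tracks the last valid token seen on each stream so that a client
        can resume with ``REPLICATE <stream> <token>`` after reconnecting."""

        def __init__(self):
            self.last_tokens = {}  # stream_name -> last numeric token seen

        def on_rdata(self, stream_name, token, row):
            # Rows belonging to a batch carry the literal token "batch"; only
            # the final row of the batch carries a usable numeric token.
            if token != "batch":
                self.last_tokens[stream_name] = int(token)

        def resume_commands(self):
            # REPLICATE commands to send after a reconnection, resuming each
            # stream from the last token we fully processed.
            return [
                "REPLICATE %s %d" % (name, token)
                for name, token in sorted(self.last_tokens.items())
            ]
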
Example ~~~~~~~ An example iteraction is shown below. Each line is prefixed with '>' or '<' to indicate which side is sending, these are *not* included on the wire:: * connection established * > SERVER localhost:8823 > PING 1490197665618 < NAME synapse.app.appservice < PING 1490197665618 < REPLICATE events 1 < REPLICATE backfill 1 < REPLICATE caches 1 > POSITION events 1 > POSITION backfill 1 > POSITION caches 1 > RDATA caches 2 ["get_user_by_id",["@01register-user:localhost:8823"],1490197670513] > RDATA events 14 ["$149019767112vOHxz:localhost:8823", "!AFDCvgApUmpdfVjIXm:localhost:8823","m.room.guest_access","",null] < PING 1490197675618 > ERROR server stopping * connection closed by server * The ``POSITION`` command sent by the server is used to set the clients position without needing to send data with the ``RDATA`` command. An example of a batched set of ``RDATA`` is:: > RDATA caches batch ["get_user_by_id",["@test:localhost:8823"],1490197670513] > RDATA caches batch ["get_user_by_id",["@test2:localhost:8823"],1490197670513] > RDATA caches batch ["get_user_by_id",["@test3:localhost:8823"],1490197670513] > RDATA caches 54 ["get_user_by_id",["@test4:localhost:8823"],1490197670513] In this case the client shouldn't advance their caches token until it sees the the last ``RDATA``. List of commands ~~~~~~~~~~~~~~~~ The list of valid commands, with which side can send it: server (S) or client (C): SERVER (S) Sent at the start to identify which server the client is talking to RDATA (S) A single update in a stream POSITION (S) The position of the stream has been updated ERROR (S, C) There was an error PING (S, C) Sent periodically to ensure the connection is still alive NAME (C) Sent at the start by client to inform the server who they are REPLICATE (C) Asks the server to replicate a given stream USER_SYNC (C) A user has started or stopped syncing FEDERATION_ACK (C) Acknowledge receipt of some federation data REMOVE_PUSHER (C) Inform the server a pusher should be removed INVALIDATE_CACHE (C) Inform the server a cache should be invalidated SYNC (S, C) Used exclusively in tests See ``synapse/replication/tcp/commands.py`` for a detailed description and the format of each command. synapse-0.24.0/docs/turn-howto.rst000066400000000000000000000111041317335640100170670ustar00rootroot00000000000000How to enable VoIP relaying on your Home Server with TURN Overview -------- The synapse Matrix Home Server supports integration with TURN server via the TURN server REST API (http://tools.ietf.org/html/draft-uberti-behave-turn-rest-00). This allows the Home Server to generate credentials that are valid for use on the TURN server through the use of a secret shared between the Home Server and the TURN server. This document describes how to install coturn (https://github.com/coturn/coturn) which also supports the TURN REST API, and integrate it with synapse. coturn Setup ============ You may be able to setup coturn via your package manager, or set it up manually using the usual ``configure, make, make install`` process. 1. Check out coturn:: git clone https://github.com/coturn/coturn.git coturn cd coturn 2. Configure it:: ./configure You may need to install ``libevent2``: if so, you should do so in the way recommended by your operating system. You can ignore warnings about lack of database support: a database is unnecessary for this purpose. 3. Build and install it:: make make install 4. Create or edit the config file in ``/etc/turnserver.conf``. 
The relevant lines, with example values, are:: lt-cred-mech use-auth-secret static-auth-secret=[your secret key here] realm=turn.myserver.org See turnserver.conf for explanations of the options. One way to generate the static-auth-secret is with pwgen:: pwgen -s 64 1 5. Consider your security settings. TURN lets users request a relay which will connect to arbitrary IP addresses and ports. At the least we recommend: # VoIP traffic is all UDP. There is no reason to let users connect to arbitrary TCP endpoints via the relay. no-tcp-relay # don't let the relay ever try to connect to private IP address ranges within your network (if any) # given the turn server is likely behind your firewall, remember to include any privileged public IPs too. denied-peer-ip=10.0.0.0-10.255.255.255 denied-peer-ip=192.168.0.0-192.168.255.255 denied-peer-ip=172.16.0.0-172.31.255.255 # special case the turn server itself so that client->TURN->TURN->client flows work allowed-peer-ip=10.0.0.1 # consider whether you want to limit the quota of relayed streams per user (or total) to avoid risk of DoS. user-quota=12 # 4 streams per video call, so 12 streams = 3 simultaneous relayed calls per user. total-quota=1200 Ideally coturn should refuse to relay traffic which isn't SRTP; see https://github.com/matrix-org/synapse/issues/2009 6. Ensure your firewall allows traffic into the TURN server on the ports you've configured it to listen on (remember to allow both TCP and UDP TURN traffic) 7. If you've configured coturn to support TLS/DTLS, generate or import your private key and certificate. 8. Start the turn server:: bin/turnserver -o synapse Setup ============= Your home server configuration file needs the following extra keys: 1. "turn_uris": This needs to be a yaml list of public-facing URIs for your TURN server to be given out to your clients. Add separate entries for each transport your TURN server supports. 2. "turn_shared_secret": This is the secret shared between your Home server and your TURN server, so you should set it to the same string you used in turnserver.conf. 3. "turn_user_lifetime": This is the amount of time credentials generated by your Home Server are valid for (in milliseconds). Shorter times offer less potential for abuse at the expense of increased traffic between web clients and your home server to refresh credentials. The TURN REST API specification recommends one day (86400000). 4. "turn_allow_guests": Whether to allow guest users to use the TURN server. This is enabled by default, as otherwise VoIP will not work reliably for guests. However, it does introduce a security risk as it lets guests connect to arbitrary endpoints without having gone through a CAPTCHA or similar to register a real account. As an example, here is the relevant section of the config file for matrix.org:: turn_uris: [ "turn:turn.matrix.org:3478?transport=udp", "turn:turn.matrix.org:3478?transport=tcp" ] turn_shared_secret: n0t4ctuAllymatr1Xd0TorgSshar3d5ecret4obvIousreAsons turn_user_lifetime: 86400000 turn_allow_guests: True Now, restart synapse:: cd /where/you/run/synapse ./synctl restart ...and your Home Server now supports VoIP relaying! synapse-0.24.0/docs/url_previews.rst000066400000000000000000000111171317335640100174730ustar00rootroot00000000000000URL Previews ============ Design notes on a URL previewing service for Matrix: Options are: 1. Have an AS which listens for URLs, downloads them, and inserts an event that describes their metadata. * Pros: * Decouples the implementation entirely from Synapse. 
* Uses existing Matrix events & content repo to store the metadata. * Cons: * Which AS should provide this service for a room, and why should you trust it? * Doesn't work well with E2E; you'd have to cut the AS into every room * the AS would end up subscribing to every room anyway. 2. Have a generic preview API (nothing to do with Matrix) that provides a previewing service: * Pros: * Simple and flexible; can be used by any clients at any point * Cons: * If each HS provides one of these independently, all the HSes in a room may needlessly DoS the target URI * We need somewhere to store the URL metadata rather than just using Matrix itself * We can't piggyback on matrix to distribute the metadata between HSes. 3. Make the synapse of the sending user responsible for spidering the URL and inserting an event asynchronously which describes the metadata. * Pros: * Works transparently for all clients * Piggy-backs nicely on using Matrix for distributing the metadata. * No confusion as to which AS * Cons: * Doesn't work with E2E * We might want to decouple the implementation of the spider from the HS, given spider behaviour can be quite complicated and evolve much more rapidly than the HS. It's more like a bot than a core part of the server. 4. Make the sending client use the preview API and insert the event itself when successful. * Pros: * Works well with E2E * No custom server functionality * Lets the client customise the preview that they send (like on FB) * Cons: * Entirely specific to the sending client, whereas it'd be nice if /any/ URL was correctly previewed if clients support it. 5. Have the option of specifying a shared (centralised) previewing service used by a room, to avoid all the different HSes in the room DoSing the target. Best solution is probably a combination of both 2 and 4. * Sending clients do their best to create and send a preview at the point of sending the message, perhaps delaying the message until the preview is computed? (This also lets the user validate the preview before sending) * Receiving clients have the option of going and creating their own preview if one doesn't arrive soon enough (or if the original sender didn't create one) This is a bit magical though in that the preview could come from two entirely different sources - the sending HS or your local one. However, this can always be exposed to users: "Generate your own URL previews if none are available?" This is tantamount also to senders calculating their own thumbnails for sending in advance of the main content - we are trusting the sender not to lie about the content in the thumbnail. Whereas currently thumbnails are calculated by the receiving homeserver to avoid this attack. However, this kind of phishing attack does exist whether we let senders pick their thumbnails or not, in that a malicious sender can send normal text messages around the attachment claiming it to be legitimate. We could rely on (future) reputation/abuse management to punish users who phish (be it with bogus metadata or bogus descriptions). Bogus metadata is particularly bad though, especially if it's avoidable. As a first cut, let's do #2 and have the receiver hit the API to calculate its own previews (as it does currently for image thumbnails). We can then extend/optimise this to option 4 as a special extra if needed. 
API --- GET /_matrix/media/r0/preview_url?url=http://wherever.com 200 OK { "og:type" : "article" "og:url" : "https://twitter.com/matrixdotorg/status/684074366691356672" "og:title" : "Matrix on Twitter" "og:image" : "https://pbs.twimg.com/profile_images/500400952029888512/yI0qtFi7_400x400.png" "og:description" : "“Synapse 0.12 is out! Lots of polishing, performance &amp; bugfixes: /sync API, /r0 prefix, fulltext search, 3PID invites https://t.co/5alhXLLEGP”" "og:site_name" : "Twitter" } * Downloads the URL * If HTML, just stores it in RAM and parses it for OG meta tags * Download any media OG meta tags to the media repo, and refer to them in the OG via mxc:// URIs. * If a media filetype we know we can thumbnail: store it on disk, and hand it to the thumbnailer. Generate OG meta tags from the thumbnailer contents. * Otherwise, don't bother downloading further. synapse-0.24.0/docs/workers.rst000066400000000000000000000077661317335640100164600ustar00rootroot00000000000000Scaling synapse via workers --------------------------- Synapse has experimental support for splitting out functionality into multiple separate python processes, helping greatly with scalability. These processes are called 'workers', and are (eventually) intended to scale horizontally independently. All processes continue to share the same database instance, and as such, workers only work with postgres based synapse deployments (sharing a single sqlite across multiple processes is a recipe for disaster, plus you should be using postgres anyway if you care about scalability). The workers communicate with the master synapse process via a synapse-specific TCP protocol called 'replication' - analogous to MySQL or Postgres style database replication; feeding a stream of relevant data to the workers so they can be kept in sync with the main synapse process and database state. To enable workers, you need to add a replication listener to the master synapse, e.g.:: listeners: - port: 9092 bind_address: '127.0.0.1' type: replication Under **no circumstances** should this replication API listener be exposed to the public internet; it currently implements no authentication whatsoever and is unencrypted. You then create a set of configs for the various worker processes. These should be worker configuration files should be stored in a dedicated subdirectory, to allow synctl to manipulate them. The current available worker applications are: * synapse.app.pusher - handles sending push notifications to sygnal and email * synapse.app.synchrotron - handles /sync endpoints. can scales horizontally through multiple instances. * synapse.app.appservice - handles output traffic to Application Services * synapse.app.federation_reader - handles receiving federation traffic (including public_rooms API) * synapse.app.media_repository - handles the media repository. * synapse.app.client_reader - handles client API endpoints like /publicRooms Each worker configuration file inherits the configuration of the main homeserver configuration file. You can then override configuration specific to that worker, e.g. the HTTP listener that it provides (if any); logging configuration; etc. You should minimise the number of overrides though to maintain a usable config. You must specify the type of worker application (worker_app) and the replication endpoint that it's talking to on the main synapse process (worker_replication_host and worker_replication_port). For instance:: worker_app: synapse.app.synchrotron # The replication listener on the synapse to talk to. 
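    # These must match the host and port of the replication listener
    # configured on the main synapse process (9092 in the example above).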
worker_replication_host: 127.0.0.1 worker_replication_port: 9092 worker_listeners: - type: http port: 8083 resources: - names: - client worker_daemonize: True worker_pid_file: /home/matrix/synapse/synchrotron.pid worker_log_config: /home/matrix/synapse/config/synchrotron_log_config.yaml ...is a full configuration for a synchrotron worker instance, which will expose a plain HTTP /sync endpoint on port 8083 separately from the /sync endpoint provided by the main synapse. Obviously you should configure your loadbalancer to route the /sync endpoint to the synchrotron instance(s) in this instance. Finally, to actually run your worker-based synapse, you must pass synctl the -a commandline option to tell it to operate on all the worker configurations found in the given directory, e.g.:: synctl -a $CONFIG/workers start Currently one should always restart all workers when restarting or upgrading synapse, unless you explicitly know it's safe not to. For instance, restarting synapse without restarting all the synchrotrons may result in broken typing notifications. To manipulate a specific worker, you pass the -w option to synctl:: synctl -w $CONFIG/workers/synchrotron.yaml restart All of the above is highly experimental and subject to change as Synapse evolves, but documenting it here to help folks needing highly scalable Synapses similar to the one running matrix.org! synapse-0.24.0/jenkins-dendron-haproxy-postgres.sh000077500000000000000000000011541317335640100222460ustar00rootroot00000000000000#!/bin/bash set -eux : ${WORKSPACE:="$(pwd)"} export WORKSPACE export PYTHONDONTWRITEBYTECODE=yep export SYNAPSE_CACHE_FACTOR=1 export HAPROXY_BIN=/home/haproxy/haproxy-1.6.11/haproxy ./jenkins/prepare_synapse.sh ./jenkins/clone.sh sytest https://github.com/matrix-org/sytest.git ./jenkins/clone.sh dendron https://github.com/matrix-org/dendron.git ./dendron/jenkins/build_dendron.sh ./sytest/jenkins/prep_sytest_for_postgres.sh ./sytest/jenkins/install_and_run.sh \ --python $WORKSPACE/.tox/py27/bin/python \ --synapse-directory $WORKSPACE \ --dendron $WORKSPACE/dendron/bin/dendron \ --haproxy \ synapse-0.24.0/jenkins-dendron-postgres.sh000077500000000000000000000010431317335640100205530ustar00rootroot00000000000000#!/bin/bash set -eux : ${WORKSPACE:="$(pwd)"} export WORKSPACE export PYTHONDONTWRITEBYTECODE=yep export SYNAPSE_CACHE_FACTOR=1 ./jenkins/prepare_synapse.sh ./jenkins/clone.sh sytest https://github.com/matrix-org/sytest.git ./jenkins/clone.sh dendron https://github.com/matrix-org/dendron.git ./dendron/jenkins/build_dendron.sh ./sytest/jenkins/prep_sytest_for_postgres.sh ./sytest/jenkins/install_and_run.sh \ --python $WORKSPACE/.tox/py27/bin/python \ --synapse-directory $WORKSPACE \ --dendron $WORKSPACE/dendron/bin/dendron \ synapse-0.24.0/jenkins-flake8.sh000077500000000000000000000011341317335640100164310ustar00rootroot00000000000000#!/bin/bash set -eux : ${WORKSPACE:="$(pwd)"} export PYTHONDONTWRITEBYTECODE=yep export SYNAPSE_CACHE_FACTOR=1 # Output test results as junit xml export TRIAL_FLAGS="--reporter=subunit" export TOXSUFFIX="| subunit-1to2 | subunit2junitxml --no-passthrough --output-to=results.xml" # Write coverage reports to a separate file for each process export COVERAGE_OPTS="-p" export DUMP_COVERAGE_COMMAND="coverage help" # Output flake8 violations to violations.flake8.log export PEP8SUFFIX="--output-file=violations.flake8.log" rm .coverage* || echo "No coverage files to remove" tox -e packaging -e pep8 
synapse-0.24.0/jenkins-postgres.sh000077500000000000000000000006151317335640100171300ustar00rootroot00000000000000#!/bin/bash set -eux : ${WORKSPACE:="$(pwd)"} export WORKSPACE export PYTHONDONTWRITEBYTECODE=yep export SYNAPSE_CACHE_FACTOR=1 ./jenkins/prepare_synapse.sh ./jenkins/clone.sh sytest https://github.com/matrix-org/sytest.git ./sytest/jenkins/prep_sytest_for_postgres.sh ./sytest/jenkins/install_and_run.sh \ --python $WORKSPACE/.tox/py27/bin/python \ --synapse-directory $WORKSPACE \ synapse-0.24.0/jenkins-sqlite.sh000077500000000000000000000005371317335640100165660ustar00rootroot00000000000000#!/bin/bash set -eux : ${WORKSPACE:="$(pwd)"} export WORKSPACE export PYTHONDONTWRITEBYTECODE=yep export SYNAPSE_CACHE_FACTOR=1 ./jenkins/prepare_synapse.sh ./jenkins/clone.sh sytest https://github.com/matrix-org/sytest.git ./sytest/jenkins/install_and_run.sh \ --python $WORKSPACE/.tox/py27/bin/python \ --synapse-directory $WORKSPACE \ synapse-0.24.0/jenkins-unittests.sh000077500000000000000000000016611317335640100173260ustar00rootroot00000000000000#!/bin/bash set -eux : ${WORKSPACE:="$(pwd)"} export PYTHONDONTWRITEBYTECODE=yep export SYNAPSE_CACHE_FACTOR=1 # Output test results as junit xml export TRIAL_FLAGS="--reporter=subunit" export TOXSUFFIX="| subunit-1to2 | subunit2junitxml --no-passthrough --output-to=results.xml" # Write coverage reports to a separate file for each process export COVERAGE_OPTS="-p" export DUMP_COVERAGE_COMMAND="coverage help" # Output flake8 violations to violations.flake8.log # Don't exit with non-0 status code on Jenkins, # so that the build steps continue and a later step can decided whether to # UNSTABLE or FAILURE this build. export PEP8SUFFIX="--output-file=violations.flake8.log || echo flake8 finished with status code \$?" rm .coverage* || echo "No coverage files to remove" tox --notest -e py27 TOX_BIN=$WORKSPACE/.tox/py27/bin python synapse/python_dependencies.py | xargs -n1 $TOX_BIN/pip install $TOX_BIN/pip install lxml tox -e py27 synapse-0.24.0/jenkins/000077500000000000000000000000001317335640100147235ustar00rootroot00000000000000synapse-0.24.0/jenkins/clone.sh000077500000000000000000000026401317335640100163640ustar00rootroot00000000000000#! /bin/bash # This clones a project from github into a named subdirectory # If the project has a branch with the same name as this branch # then it will checkout that branch after cloning. # Otherwise it will checkout "origin/develop." # The first argument is the name of the directory to checkout # the branch into. # The second argument is the URL of the remote repository to checkout. # Usually something like https://github.com/matrix-org/sytest.git set -eux NAME=$1 PROJECT=$2 BASE=".$NAME-base" # Update our mirror. if [ ! -d ".$NAME-base" ]; then # Create a local mirror of the source repository. # This saves us from having to download the entire repository # when this script is next run. git clone "$PROJECT" "$BASE" --mirror else # Fetch any updates from the source repository. (cd "$BASE"; git fetch -p) fi # Remove the existing repository so that we have a clean copy rm -rf "$NAME" # Cloning with --shared means that we will share portions of the # .git directory with our local mirror. git clone "$BASE" "$NAME" --shared # Jenkins may have supplied us with the name of the branch in the # environment. Otherwise we will have to guess based on the current # commit. 
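# (The ':' below is a shell no-op; the ${VAR:=default} expansion inside it
# assigns a default to GIT_BRANCH only when it is unset or empty.)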
: ${GIT_BRANCH:="origin/$(git rev-parse --abbrev-ref HEAD)"} cd "$NAME" # check out the relevant branch git checkout "${GIT_BRANCH}" || ( echo >&2 "No ref ${GIT_BRANCH} found, falling back to develop" git checkout "origin/develop" ) synapse-0.24.0/jenkins/prepare_synapse.sh000077500000000000000000000005261317335640100204650ustar00rootroot00000000000000#! /bin/bash cd "`dirname $0`/.." TOX_DIR=$WORKSPACE/.tox mkdir -p $TOX_DIR if ! [ $TOX_DIR -ef .tox ]; then ln -s "$TOX_DIR" .tox fi # set up the virtualenv tox -e py27 --notest -v TOX_BIN=$TOX_DIR/py27/bin $TOX_BIN/pip install setuptools { python synapse/python_dependencies.py echo lxml psycopg2 } | xargs $TOX_BIN/pip install synapse-0.24.0/pylint.cfg000066400000000000000000000207361317335640100152720ustar00rootroot00000000000000[MASTER] # Specify a configuration file. #rcfile= # Python code to execute, usually for sys.path manipulation such as # pygtk.require(). #init-hook= # Profiled execution. profile=no # Add files or directories to the blacklist. They should be base names, not # paths. ignore=CVS # Pickle collected data for later comparisons. persistent=yes # List of plugins (as comma separated values of python modules names) to load, # usually to register additional checkers. load-plugins= [MESSAGES CONTROL] # Enable the message, report, category or checker with the given id(s). You can # either give multiple identifier separated by comma (,) or put this option # multiple time. See also the "--disable" option for examples. #enable= # Disable the message, report, category or checker with the given id(s). You # can either give multiple identifiers separated by comma (,) or put this # option multiple times (only on the command line, not in the configuration # file where it should appear only once).You can also use "--disable=all" to # disable everything first and then reenable specific checks. For example, if # you want to run only the similarities checker, you can use "--disable=all # --enable=similarities". If you want to run only the classes checker, but have # no Warning level messages displayed, use"--disable=all --enable=classes # --disable=W" disable=missing-docstring [REPORTS] # Set the output format. Available formats are text, parseable, colorized, msvs # (visual studio) and html. You can also give a reporter class, eg # mypackage.mymodule.MyReporterClass. output-format=text # Put messages in a separate file for each module / package specified on the # command line instead of printing them on stdout. Reports (if any) will be # written in a file name "pylint_global.[txt|html]". files-output=no # Tells whether to display a full report or only the messages reports=yes # Python expression which should return a note less than 10 (10 is the highest # note). You have access to the variables errors warning, statement which # respectively contain the number of errors / warnings messages and the total # number of statements analyzed. This is used by the global evaluation report # (RP0004). evaluation=10.0 - ((float(5 * error + warning + refactor + convention) / statement) * 10) # Add a comment according to your evaluation note. This is used by the global # evaluation report (RP0004). comment=no # Template used to display messages. This is a python new-style format string # used to format the message information. See doc for all details #msg-template= [TYPECHECK] # Tells whether missing members accessed in mixin class should be ignored. A # mixin class is detected if its name ends with "mixin" (case insensitive). 
ignore-mixin-members=yes # List of classes names for which member attributes should not be checked # (useful for classes with attributes dynamically set). ignored-classes=SQLObject # When zope mode is activated, add a predefined set of Zope acquired attributes # to generated-members. zope=no # List of members which are set dynamically and missed by pylint inference # system, and so shouldn't trigger E0201 when accessed. Python regular # expressions are accepted. generated-members=REQUEST,acl_users,aq_parent [MISCELLANEOUS] # List of note tags to take in consideration, separated by a comma. notes=FIXME,XXX,TODO [SIMILARITIES] # Minimum lines number of a similarity. min-similarity-lines=4 # Ignore comments when computing similarities. ignore-comments=yes # Ignore docstrings when computing similarities. ignore-docstrings=yes # Ignore imports when computing similarities. ignore-imports=no [VARIABLES] # Tells whether we should check for unused import in __init__ files. init-import=no # A regular expression matching the beginning of the name of dummy variables # (i.e. not used). dummy-variables-rgx=_$|dummy # List of additional names supposed to be defined in builtins. Remember that # you should avoid to define new builtins when possible. additional-builtins= [BASIC] # Required attributes for module, separated by a comma required-attributes= # List of builtins function names that should not be used, separated by a comma bad-functions=map,filter,apply,input # Regular expression which should only match correct module names module-rgx=(([a-z_][a-z0-9_]*)|([A-Z][a-zA-Z0-9]+))$ # Regular expression which should only match correct module level names const-rgx=(([A-Z_][A-Z0-9_]*)|(__.*__))$ # Regular expression which should only match correct class names class-rgx=[A-Z_][a-zA-Z0-9]+$ # Regular expression which should only match correct function names function-rgx=[a-z_][a-z0-9_]{2,30}$ # Regular expression which should only match correct method names method-rgx=[a-z_][a-z0-9_]{2,30}$ # Regular expression which should only match correct instance attribute names attr-rgx=[a-z_][a-z0-9_]{2,30}$ # Regular expression which should only match correct argument names argument-rgx=[a-z_][a-z0-9_]{2,30}$ # Regular expression which should only match correct variable names variable-rgx=[a-z_][a-z0-9_]{2,30}$ # Regular expression which should only match correct attribute names in class # bodies class-attribute-rgx=([A-Za-z_][A-Za-z0-9_]{2,30}|(__.*__))$ # Regular expression which should only match correct list comprehension / # generator expression variable names inlinevar-rgx=[A-Za-z_][A-Za-z0-9_]*$ # Good variable names which should always be accepted, separated by a comma good-names=i,j,k,ex,Run,_ # Bad variable names which should always be refused, separated by a comma bad-names=foo,bar,baz,toto,tutu,tata # Regular expression which should only match function or class names that do # not require a docstring. no-docstring-rgx=__.*__ # Minimum line length for functions/classes that require docstrings, shorter # ones are exempt. docstring-min-length=-1 [FORMAT] # Maximum number of characters on a single line. max-line-length=80 # Regexp for a line that is allowed to be longer than the limit. ignore-long-lines=^\s*(# )??$ # Allow the body of an if to be on the same line as the test if there is no # else. 
single-line-if-stmt=no # List of optional constructs for which whitespace checking is disabled no-space-check=trailing-comma,dict-separator # Maximum number of lines in a module max-module-lines=1000 # String used as indentation unit. This is usually " " (4 spaces) or "\t" (1 # tab). indent-string=' ' [DESIGN] # Maximum number of arguments for function / method max-args=5 # Argument names that match this expression will be ignored. Default to name # with leading underscore ignored-argument-names=_.* # Maximum number of locals for function / method body max-locals=15 # Maximum number of return / yield for function / method body max-returns=6 # Maximum number of branch for function / method body max-branches=12 # Maximum number of statements in function / method body max-statements=50 # Maximum number of parents for a class (see R0901). max-parents=7 # Maximum number of attributes for a class (see R0902). max-attributes=7 # Minimum number of public methods for a class (see R0903). min-public-methods=2 # Maximum number of public methods for a class (see R0904). max-public-methods=20 [IMPORTS] # Deprecated modules which should not be used, separated by a comma deprecated-modules=regsub,TERMIOS,Bastion,rexec # Create a graph of every (i.e. internal and external) dependencies in the # given file (report RP0402 must not be disabled) import-graph= # Create a graph of external dependencies in the given file (report RP0402 must # not be disabled) ext-import-graph= # Create a graph of internal dependencies in the given file (report RP0402 must # not be disabled) int-import-graph= [CLASSES] # List of interface methods to ignore, separated by a comma. This is used for # instance to not check methods defines in Zope's Interface base class. ignore-iface-methods=isImplementedBy,deferred,extends,names,namesAndDescriptions,queryDescriptionFor,getBases,getDescriptionFor,getDoc,getName,getTaggedValue,getTaggedValueTags,isEqualOrExtendedBy,setTaggedValue,isImplementedByInstancesOf,adaptWith,is_implemented_by # List of method names used to declare (i.e. assign) instance attributes. defining-attr-methods=__init__,__new__,setUp # List of valid names for the first argument in a class method. valid-classmethod-first-arg=cls # List of valid names for the first argument in a metaclass class method. valid-metaclass-classmethod-first-arg=mcs [EXCEPTIONS] # Exceptions that will emit a warning when being caught. Defaults to # "Exception" overgeneral-exceptions=Exception synapse-0.24.0/res/000077500000000000000000000000001317335640100140535ustar00rootroot00000000000000synapse-0.24.0/res/templates/000077500000000000000000000000001317335640100160515ustar00rootroot00000000000000synapse-0.24.0/res/templates/mail-Vector.css000066400000000000000000000001741317335640100207470ustar00rootroot00000000000000.header { border-bottom: 4px solid #e4f7ed ! important; } .notif_link a, .footer a { color: #76CFA6 ! 
important; } synapse-0.24.0/res/templates/mail.css000066400000000000000000000043141317335640100175070ustar00rootroot00000000000000body { margin: 0px; } pre, code { word-break: break-word; white-space: pre-wrap; } #page { font-family: 'Open Sans', Helvetica, Arial, Sans-Serif; font-color: #454545; font-size: 12pt; width: 100%; padding: 20px; } #inner { width: 640px; } .header { width: 100%; height: 87px; color: #454545; border-bottom: 4px solid #e5e5e5; } .logo { text-align: right; margin-left: 20px; } .salutation { padding-top: 10px; font-weight: bold; } .summarytext { } .room { width: 100%; color: #454545; border-bottom: 1px solid #e5e5e5; } .room_header td { padding-top: 38px; padding-bottom: 10px; border-bottom: 1px solid #e5e5e5; } .room_name { vertical-align: middle; font-size: 18px; font-weight: bold; } .room_header h2 { margin-top: 0px; margin-left: 75px; font-size: 20px; } .room_avatar { width: 56px; line-height: 0px; text-align: center; vertical-align: middle; } .room_avatar img { width: 48px; height: 48px; object-fit: cover; border-radius: 24px; } .notif { border-bottom: 1px solid #e5e5e5; margin-top: 16px; padding-bottom: 16px; } .historical_message .sender_avatar { opacity: 0.3; } /* spell out opacity and historical_message class names for Outlook aka Word */ .historical_message .sender_name { color: #e3e3e3; } .historical_message .message_time { color: #e3e3e3; } .historical_message .message_body { color: #c7c7c7; } .historical_message td, .message td { padding-top: 10px; } .sender_avatar { width: 56px; text-align: center; vertical-align: top; } .sender_avatar img { margin-top: -2px; width: 32px; height: 32px; border-radius: 16px; } .sender_name { display: inline; font-size: 13px; color: #a2a2a2; } .message_time { text-align: right; width: 100px; font-size: 11px; color: #a2a2a2; } .message_body { } .notif_link td { padding-top: 10px; padding-bottom: 10px; font-weight: bold; } .notif_link a, .footer a { color: #454545; text-decoration: none; } .debug { font-size: 10px; color: #888; } .footer { margin-top: 20px; text-align: center; }synapse-0.24.0/res/templates/notif.html000066400000000000000000000043231317335640100200600ustar00rootroot00000000000000{% for message in notif.messages %} {% if loop.index0 == 0 or notif.messages[loop.index0 - 1].sender_name != notif.messages[loop.index0].sender_name %} {% if message.sender_avatar_url %} {% else %} {% if message.sender_hash % 3 == 0 %} {% elif message.sender_hash % 3 == 1 %} {% else %} {% endif %} {% endif %} {% endif %} {% if loop.index0 == 0 or notif.messages[loop.index0 - 1].sender_name != notif.messages[loop.index0].sender_name %}
{% if message.msgtype == "m.emote" %}*{% endif %} {{ message.sender_name }}
{% endif %}
{% if message.msgtype == "m.text" %} {{ message.body_text_html }} {% elif message.msgtype == "m.emote" %} {{ message.body_text_html }} {% elif message.msgtype == "m.notice" %} {{ message.body_text_html }} {% elif message.msgtype == "m.image" %} {% elif message.msgtype == "m.file" %} {{ message.body_text_plain }} {% endif %}
{{ message.ts|format_ts("%H:%M") }} {% endfor %} View {{ room.title }} synapse-0.24.0/res/templates/notif.txt000066400000000000000000000010651317335640100177330ustar00rootroot00000000000000{% for message in notif.messages %} {% if message.msgtype == "m.emote" %}* {% endif %}{{ message.sender_name }} ({{ message.ts|format_ts("%H:%M") }}) {% if message.msgtype == "m.text" %} {{ message.body_text_plain }} {% elif message.msgtype == "m.emote" %} {{ message.body_text_plain }} {% elif message.msgtype == "m.notice" %} {{ message.body_text_plain }} {% elif message.msgtype == "m.image" %} {{ message.body_text_plain }} {% elif message.msgtype == "m.file" %} {{ message.body_text_plain }} {% endif %} {% endfor %} View {{ room.title }} at {{ notif.link }} synapse-0.24.0/res/templates/notif_mail.html000066400000000000000000000053461317335640100210700ustar00rootroot00000000000000
Hi {{ user_display_name }},
{{ summary_text }}
{% for room in rooms %} {% include 'room.html' with context %} {% endfor %}
synapse-0.24.0/res/templates/notif_mail.txt000066400000000000000000000002741317335640100207360ustar00rootroot00000000000000Hi {{ user_display_name }}, {{ summary_text }} {% for room in rooms %} {% include 'room.txt' with context %} {% endfor %} You can disable these notifications at {{ unsubscribe_link }} synapse-0.24.0/res/templates/room.html000066400000000000000000000021041317335640100177100ustar00rootroot00000000000000 {% if room.invite %} {% else %} {% for notif in room.notifs %} {% include 'notif.html' with context %} {% endfor %} {% endif %}
{% if room.avatar_url %} {% else %} {% if room.hash % 3 == 0 %} {% elif room.hash % 3 == 1 %} {% else %} {% endif %} {% endif %} {{ room.title }}
Join the conversation.
synapse-0.24.0/res/templates/room.txt000066400000000000000000000003221317335640100175630ustar00rootroot00000000000000{{ room.title }} {% if room.invite %} You've been invited, join at {{ room.link }} {% else %} {% for notif in room.notifs %} {% include 'notif.txt' with context %} {% endfor %} {% endif %} synapse-0.24.0/scripts-dev/000077500000000000000000000000001317335640100155255ustar00rootroot00000000000000synapse-0.24.0/scripts-dev/check_auth.py000066400000000000000000000030051317335640100201730ustar00rootroot00000000000000from synapse.events import FrozenEvent from synapse.api.auth import Auth from mock import Mock import argparse import itertools import json import sys def check_auth(auth, auth_chain, events): auth_chain.sort(key=lambda e: e.depth) auth_map = { e.event_id: e for e in auth_chain } create_events = {} for e in auth_chain: if e.type == "m.room.create": create_events[e.room_id] = e for e in itertools.chain(auth_chain, events): auth_events_list = [auth_map[i] for i, _ in e.auth_events] auth_events = { (e.type, e.state_key): e for e in auth_events_list } auth_events[("m.room.create", "")] = create_events[e.room_id] try: auth.check(e, auth_events=auth_events) except Exception as ex: print "Failed:", e.event_id, e.type, e.state_key print "Auth_events:", auth_events print ex print json.dumps(e.get_dict(), sort_keys=True, indent=4) # raise print "Success:", e.event_id, e.type, e.state_key if __name__ == '__main__': parser = argparse.ArgumentParser() parser.add_argument( 'json', nargs='?', type=argparse.FileType('r'), default=sys.stdin, ) args = parser.parse_args() js = json.load(args.json) auth = Auth(Mock()) check_auth( auth, [FrozenEvent(d) for d in js["auth_chain"]], [FrozenEvent(d) for d in js.get("pdus", [])], ) synapse-0.24.0/scripts-dev/check_event_hash.py000066400000000000000000000023351317335640100213630ustar00rootroot00000000000000from synapse.crypto.event_signing import * from unpaddedbase64 import encode_base64 import argparse import hashlib import sys import json class dictobj(dict): def __init__(self, *args, **kargs): dict.__init__(self, *args, **kargs) self.__dict__ = self def get_dict(self): return dict(self) def get_full_dict(self): return dict(self) def get_pdu_json(self): return dict(self) def main(): parser = argparse.ArgumentParser() parser.add_argument("input_json", nargs="?", type=argparse.FileType('r'), default=sys.stdin) args = parser.parse_args() logging.basicConfig() event_json = dictobj(json.load(args.input_json)) algorithms = { "sha256": hashlib.sha256, } for alg_name in event_json.hashes: if check_event_content_hash(event_json, algorithms[alg_name]): print "PASS content hash %s" % (alg_name,) else: print "FAIL content hash %s" % (alg_name,) for algorithm in algorithms.values(): name, h_bytes = compute_event_reference_hash(event_json, algorithm) print "Reference hash %s: %s" % (name, encode_base64(h_bytes)) if __name__=="__main__": main() synapse-0.24.0/scripts-dev/check_signature.py000066400000000000000000000042671317335640100212460ustar00rootroot00000000000000 from signedjson.sign import verify_signed_json from signedjson.key import decode_verify_key_bytes, write_signing_keys from unpaddedbase64 import decode_base64 import urllib2 import json import sys import dns.resolver import pprint import argparse import logging def get_targets(server_name): if ":" in server_name: target, port = server_name.split(":") yield (target, int(port)) return try: answers = dns.resolver.query("_matrix._tcp." 
+ server_name, "SRV") for srv in answers: yield (srv.target, srv.port) except dns.resolver.NXDOMAIN: yield (server_name, 8448) def get_server_keys(server_name, target, port): url = "https://%s:%i/_matrix/key/v1" % (target, port) keys = json.load(urllib2.urlopen(url)) verify_keys = {} for key_id, key_base64 in keys["verify_keys"].items(): verify_key = decode_verify_key_bytes(key_id, decode_base64(key_base64)) verify_signed_json(keys, server_name, verify_key) verify_keys[key_id] = verify_key return verify_keys def main(): parser = argparse.ArgumentParser() parser.add_argument("signature_name") parser.add_argument("input_json", nargs="?", type=argparse.FileType('r'), default=sys.stdin) args = parser.parse_args() logging.basicConfig() server_name = args.signature_name keys = {} for target, port in get_targets(server_name): try: keys = get_server_keys(server_name, target, port) print "Using keys from https://%s:%s/_matrix/key/v1" % (target, port) write_signing_keys(sys.stdout, keys.values()) break except: logging.exception("Error talking to %s:%s", target, port) json_to_check = json.load(args.input_json) print "Checking JSON:" for key_id in json_to_check["signatures"][args.signature_name]: try: key = keys[key_id] verify_signed_json(json_to_check, args.signature_name, key) print "PASS %s" % (key_id,) except: logging.exception("Check for key %s failed" % (key_id,)) print "FAIL %s" % (key_id,) if __name__ == '__main__': main() synapse-0.24.0/scripts-dev/convert_server_keys.py000066400000000000000000000065601317335640100222070ustar00rootroot00000000000000import psycopg2 import yaml import sys import json import time import hashlib from unpaddedbase64 import encode_base64 from signedjson.key import read_signing_keys from signedjson.sign import sign_json from canonicaljson import encode_canonical_json def select_v1_keys(connection): cursor = connection.cursor() cursor.execute("SELECT server_name, key_id, verify_key FROM server_signature_keys") rows = cursor.fetchall() cursor.close() results = {} for server_name, key_id, verify_key in rows: results.setdefault(server_name, {})[key_id] = encode_base64(verify_key) return results def select_v1_certs(connection): cursor = connection.cursor() cursor.execute("SELECT server_name, tls_certificate FROM server_tls_certificates") rows = cursor.fetchall() cursor.close() results = {} for server_name, tls_certificate in rows: results[server_name] = tls_certificate return results def select_v2_json(connection): cursor = connection.cursor() cursor.execute("SELECT server_name, key_id, key_json FROM server_keys_json") rows = cursor.fetchall() cursor.close() results = {} for server_name, key_id, key_json in rows: results.setdefault(server_name, {})[key_id] = json.loads(str(key_json).decode("utf-8")) return results def convert_v1_to_v2(server_name, valid_until, keys, certificate): return { "old_verify_keys": {}, "server_name": server_name, "verify_keys": { key_id: {"key": key} for key_id, key in keys.items() }, "valid_until_ts": valid_until, "tls_fingerprints": [fingerprint(certificate)], } def fingerprint(certificate): finger = hashlib.sha256(certificate) return {"sha256": encode_base64(finger.digest())} def rows_v2(server, json): valid_until = json["valid_until_ts"] key_json = encode_canonical_json(json) for key_id in json["verify_keys"]: yield (server, key_id, "-", valid_until, valid_until, buffer(key_json)) def main(): config = yaml.load(open(sys.argv[1])) valid_until = int(time.time() / (3600 * 24)) * 1000 * 3600 * 24 server_name = config["server_name"] signing_key = 
read_signing_keys(open(config["signing_key_path"]))[0] database = config["database"] assert database["name"] == "psycopg2", "Can only convert for postgresql" args = database["args"] args.pop("cp_max") args.pop("cp_min") connection = psycopg2.connect(**args) keys = select_v1_keys(connection) certificates = select_v1_certs(connection) json = select_v2_json(connection) result = {} for server in keys: if not server in json: v2_json = convert_v1_to_v2( server, valid_until, keys[server], certificates[server] ) v2_json = sign_json(v2_json, server_name, signing_key) result[server] = v2_json yaml.safe_dump(result, sys.stdout, default_flow_style=False) rows = list( row for server, json in result.items() for row in rows_v2(server, json) ) cursor = connection.cursor() cursor.executemany( "INSERT INTO server_keys_json (" " server_name, key_id, from_server," " ts_added_ms, ts_valid_until_ms, key_json" ") VALUES (%s, %s, %s, %s, %s, %s)", rows ) connection.commit() if __name__ == '__main__': main() synapse-0.24.0/scripts-dev/copyrighter-sql.pl000077500000000000000000000024061317335640100212230ustar00rootroot00000000000000#!/usr/bin/perl -pi # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. $copyright = <table-save.sql .dump users .dump access_tokens .dump presence .dump profiles EOF synapse-0.24.0/scripts-dev/definitions.py000077500000000000000000000145241317335640100204230ustar00rootroot00000000000000#! 
/usr/bin/python import ast import yaml class DefinitionVisitor(ast.NodeVisitor): def __init__(self): super(DefinitionVisitor, self).__init__() self.functions = {} self.classes = {} self.names = {} self.attrs = set() self.definitions = { 'def': self.functions, 'class': self.classes, 'names': self.names, 'attrs': self.attrs, } def visit_Name(self, node): self.names.setdefault(type(node.ctx).__name__, set()).add(node.id) def visit_Attribute(self, node): self.attrs.add(node.attr) for child in ast.iter_child_nodes(node): self.visit(child) def visit_ClassDef(self, node): visitor = DefinitionVisitor() self.classes[node.name] = visitor.definitions for child in ast.iter_child_nodes(node): visitor.visit(child) def visit_FunctionDef(self, node): visitor = DefinitionVisitor() self.functions[node.name] = visitor.definitions for child in ast.iter_child_nodes(node): visitor.visit(child) def non_empty(defs): functions = {name: non_empty(f) for name, f in defs['def'].items()} classes = {name: non_empty(f) for name, f in defs['class'].items()} result = {} if functions: result['def'] = functions if classes: result['class'] = classes names = defs['names'] uses = [] for name in names.get('Load', ()): if name not in names.get('Param', ()) and name not in names.get('Store', ()): uses.append(name) uses.extend(defs['attrs']) if uses: result['uses'] = uses result['names'] = names result['attrs'] = defs['attrs'] return result def definitions_in_code(input_code): input_ast = ast.parse(input_code) visitor = DefinitionVisitor() visitor.visit(input_ast) definitions = non_empty(visitor.definitions) return definitions def definitions_in_file(filepath): with open(filepath) as f: return definitions_in_code(f.read()) def defined_names(prefix, defs, names): for name, funcs in defs.get('def', {}).items(): names.setdefault(name, {'defined': []})['defined'].append(prefix + name) defined_names(prefix + name + ".", funcs, names) for name, funcs in defs.get('class', {}).items(): names.setdefault(name, {'defined': []})['defined'].append(prefix + name) defined_names(prefix + name + ".", funcs, names) def used_names(prefix, item, defs, names): for name, funcs in defs.get('def', {}).items(): used_names(prefix + name + ".", name, funcs, names) for name, funcs in defs.get('class', {}).items(): used_names(prefix + name + ".", name, funcs, names) path = prefix.rstrip('.') for used in defs.get('uses', ()): if used in names: if item: names[item].setdefault('uses', []).append(used) names[used].setdefault('used', {}).setdefault(item, []).append(path) if __name__ == '__main__': import sys, os, argparse, re parser = argparse.ArgumentParser(description='Find definitions.') parser.add_argument( "--unused", action="store_true", help="Only list unused definitions" ) parser.add_argument( "--ignore", action="append", metavar="REGEXP", help="Ignore a pattern" ) parser.add_argument( "--pattern", action="append", metavar="REGEXP", help="Search for a pattern" ) parser.add_argument( "directories", nargs='+', metavar="DIR", help="Directories to search for definitions" ) parser.add_argument( "--referrers", default=0, type=int, help="Include referrers up to the given depth" ) parser.add_argument( "--referred", default=0, type=int, help="Include referred down to the given depth" ) parser.add_argument( "--format", default="yaml", help="Output format, one of 'yaml' or 'dot'" ) args = parser.parse_args() definitions = {} for directory in args.directories: for root, dirs, files in os.walk(directory): for filename in files: if filename.endswith(".py"): filepath = 
os.path.join(root, filename) definitions[filepath] = definitions_in_file(filepath) names = {} for filepath, defs in definitions.items(): defined_names(filepath + ":", defs, names) for filepath, defs in definitions.items(): used_names(filepath + ":", None, defs, names) patterns = [re.compile(pattern) for pattern in args.pattern or ()] ignore = [re.compile(pattern) for pattern in args.ignore or ()] result = {} for name, definition in names.items(): if patterns and not any(pattern.match(name) for pattern in patterns): continue if ignore and any(pattern.match(name) for pattern in ignore): continue if args.unused and definition.get('used'): continue result[name] = definition referrer_depth = args.referrers referrers = set() while referrer_depth: referrer_depth -= 1 for entry in result.values(): for used_by in entry.get("used", ()): referrers.add(used_by) for name, definition in names.items(): if not name in referrers: continue if ignore and any(pattern.match(name) for pattern in ignore): continue result[name] = definition referred_depth = args.referred referred = set() while referred_depth: referred_depth -= 1 for entry in result.values(): for uses in entry.get("uses", ()): referred.add(uses) for name, definition in names.items(): if not name in referred: continue if ignore and any(pattern.match(name) for pattern in ignore): continue result[name] = definition if args.format == 'yaml': yaml.dump(result, sys.stdout, default_flow_style=False) elif args.format == 'dot': print "digraph {" for name, entry in result.items(): print name for used_by in entry.get("used", ()): if used_by in result: print used_by, "->", name print "}" else: raise ValueError("Unknown format %r" % (args.format)) synapse-0.24.0/scripts-dev/dump_macaroon.py000077500000000000000000000010231317335640100207220ustar00rootroot00000000000000#!/usr/bin/env python2 import pymacaroons import sys if len(sys.argv) == 1: sys.stderr.write("usage: %s macaroon [key]\n" % (sys.argv[0],)) sys.exit(1) macaroon_string = sys.argv[1] key = sys.argv[2] if len(sys.argv) > 2 else None macaroon = pymacaroons.Macaroon.deserialize(macaroon_string) print macaroon.inspect() print "" verifier = pymacaroons.Verifier() verifier.satisfy_general(lambda c: True) try: verifier.verify(macaroon, key) print "Signature is correct" except Exception as e: print e.message synapse-0.24.0/scripts-dev/federation_client.py000077500000000000000000000147251317335640100215710ustar00rootroot00000000000000#!/usr/bin/env python # # Copyright 2015, 2016 OpenMarket Ltd # Copyright 2017 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
from __future__ import print_function import argparse import nacl.signing import json import base64 import requests import sys import srvlookup import yaml def encode_base64(input_bytes): """Encode bytes as a base64 string without any padding.""" input_len = len(input_bytes) output_len = 4 * ((input_len + 2) // 3) + (input_len + 2) % 3 - 2 output_bytes = base64.b64encode(input_bytes) output_string = output_bytes[:output_len].decode("ascii") return output_string def decode_base64(input_string): """Decode a base64 string to bytes inferring padding from the length of the string.""" input_bytes = input_string.encode("ascii") input_len = len(input_bytes) padding = b"=" * (3 - ((input_len + 3) % 4)) output_len = 3 * ((input_len + 2) // 4) + (input_len + 2) % 4 - 2 output_bytes = base64.b64decode(input_bytes + padding) return output_bytes[:output_len] def encode_canonical_json(value): return json.dumps( value, # Encode code-points outside of ASCII as UTF-8 rather than \u escapes ensure_ascii=False, # Remove unecessary white space. separators=(',',':'), # Sort the keys of dictionaries. sort_keys=True, # Encode the resulting unicode as UTF-8 bytes. ).encode("UTF-8") def sign_json(json_object, signing_key, signing_name): signatures = json_object.pop("signatures", {}) unsigned = json_object.pop("unsigned", None) signed = signing_key.sign(encode_canonical_json(json_object)) signature_base64 = encode_base64(signed.signature) key_id = "%s:%s" % (signing_key.alg, signing_key.version) signatures.setdefault(signing_name, {})[key_id] = signature_base64 json_object["signatures"] = signatures if unsigned is not None: json_object["unsigned"] = unsigned return json_object NACL_ED25519 = "ed25519" def decode_signing_key_base64(algorithm, version, key_base64): """Decode a base64 encoded signing key Args: algorithm (str): The algorithm the key is for (currently "ed25519"). version (str): Identifies this key out of the keys for this entity. key_base64 (str): Base64 encoded bytes of the key. Returns: A SigningKey object. """ if algorithm == NACL_ED25519: key_bytes = decode_base64(key_base64) key = nacl.signing.SigningKey(key_bytes) key.version = version key.alg = NACL_ED25519 return key else: raise ValueError("Unsupported algorithm %s" % (algorithm,)) def read_signing_keys(stream): """Reads a list of keys from a stream Args: stream : A stream to iterate for keys. Returns: list of SigningKey objects. 
""" keys = [] for line in stream: algorithm, version, key_base64 = line.split() keys.append(decode_signing_key_base64(algorithm, version, key_base64)) return keys def lookup(destination, path): if ":" in destination: return "https://%s%s" % (destination, path) else: try: srv = srvlookup.lookup("matrix", "tcp", destination)[0] return "https://%s:%d%s" % (srv.host, srv.port, path) except: return "https://%s:%d%s" % (destination, 8448, path) def get_json(origin_name, origin_key, destination, path): request_json = { "method": "GET", "uri": path, "origin": origin_name, "destination": destination, } signed_json = sign_json(request_json, origin_key, origin_name) authorization_headers = [] for key, sig in signed_json["signatures"][origin_name].items(): header = "X-Matrix origin=%s,key=\"%s\",sig=\"%s\"" % ( origin_name, key, sig, ) authorization_headers.append(bytes(header)) print ("Authorization: %s" % header, file=sys.stderr) dest = lookup(destination, path) print ("Requesting %s" % dest, file=sys.stderr) result = requests.get( dest, headers={"Authorization": authorization_headers[0]}, verify=False, ) sys.stderr.write("Status Code: %d\n" % (result.status_code,)) return result.json() def main(): parser = argparse.ArgumentParser( description= "Signs and sends a federation request to a matrix homeserver", ) parser.add_argument( "-N", "--server-name", help="Name to give as the local homeserver. If unspecified, will be " "read from the config file.", ) parser.add_argument( "-k", "--signing-key-path", help="Path to the file containing the private ed25519 key to sign the " "request with.", ) parser.add_argument( "-c", "--config", default="homeserver.yaml", help="Path to server config file. Ignored if --server-name and " "--signing-key-path are both given.", ) parser.add_argument( "-d", "--destination", default="matrix.org", help="name of the remote homeserver. We will do SRV lookups and " "connect appropriately.", ) parser.add_argument( "path", help="request path. We will add '/_matrix/federation/v1/' to this." 
) args = parser.parse_args() if not args.server_name or not args.signing_key_path: read_args_from_config(args) with open(args.signing_key_path) as f: key = read_signing_keys(f)[0] result = get_json( args.server_name, key, args.destination, "/_matrix/federation/v1/" + args.path ) json.dump(result, sys.stdout) print ("") def read_args_from_config(args): with open(args.config, 'r') as fh: config = yaml.safe_load(fh) if not args.server_name: args.server_name = config['server_name'] if not args.signing_key_path: args.signing_key_path = config['signing_key_path'] if __name__ == "__main__": main() synapse-0.24.0/scripts-dev/hash_history.py000066400000000000000000000052321317335640100206050ustar00rootroot00000000000000from synapse.storage.pdu import PduStore from synapse.storage.signatures import SignatureStore from synapse.storage._base import SQLBaseStore from synapse.federation.units import Pdu from synapse.crypto.event_signing import ( add_event_pdu_content_hash, compute_pdu_event_reference_hash ) from synapse.api.events.utils import prune_pdu from unpaddedbase64 import encode_base64, decode_base64 from canonicaljson import encode_canonical_json import sqlite3 import sys class Store(object): _get_pdu_tuples = PduStore.__dict__["_get_pdu_tuples"] _get_pdu_content_hashes_txn = SignatureStore.__dict__["_get_pdu_content_hashes_txn"] _get_prev_pdu_hashes_txn = SignatureStore.__dict__["_get_prev_pdu_hashes_txn"] _get_pdu_origin_signatures_txn = SignatureStore.__dict__["_get_pdu_origin_signatures_txn"] _store_pdu_content_hash_txn = SignatureStore.__dict__["_store_pdu_content_hash_txn"] _store_pdu_reference_hash_txn = SignatureStore.__dict__["_store_pdu_reference_hash_txn"] _store_prev_pdu_hash_txn = SignatureStore.__dict__["_store_prev_pdu_hash_txn"] _simple_insert_txn = SQLBaseStore.__dict__["_simple_insert_txn"] store = Store() def select_pdus(cursor): cursor.execute( "SELECT pdu_id, origin FROM pdus ORDER BY depth ASC" ) ids = cursor.fetchall() pdu_tuples = store._get_pdu_tuples(cursor, ids) pdus = [Pdu.from_pdu_tuple(p) for p in pdu_tuples] reference_hashes = {} for pdu in pdus: try: if pdu.prev_pdus: print "PROCESS", pdu.pdu_id, pdu.origin, pdu.prev_pdus for pdu_id, origin, hashes in pdu.prev_pdus: ref_alg, ref_hsh = reference_hashes[(pdu_id, origin)] hashes[ref_alg] = encode_base64(ref_hsh) store._store_prev_pdu_hash_txn(cursor, pdu.pdu_id, pdu.origin, pdu_id, origin, ref_alg, ref_hsh) print "SUCCESS", pdu.pdu_id, pdu.origin, pdu.prev_pdus pdu = add_event_pdu_content_hash(pdu) ref_alg, ref_hsh = compute_pdu_event_reference_hash(pdu) reference_hashes[(pdu.pdu_id, pdu.origin)] = (ref_alg, ref_hsh) store._store_pdu_reference_hash_txn(cursor, pdu.pdu_id, pdu.origin, ref_alg, ref_hsh) for alg, hsh_base64 in pdu.hashes.items(): print alg, hsh_base64 store._store_pdu_content_hash_txn(cursor, pdu.pdu_id, pdu.origin, alg, decode_base64(hsh_base64)) except: print "FAILED_", pdu.pdu_id, pdu.origin, pdu.prev_pdus def main(): conn = sqlite3.connect(sys.argv[1]) cursor = conn.cursor() select_pdus(cursor) conn.commit() if __name__=='__main__': main() synapse-0.24.0/scripts-dev/list_url_patterns.py000077500000000000000000000025111317335640100216560ustar00rootroot00000000000000#! 
/usr/bin/python import ast import argparse import os import sys import yaml PATTERNS_V1 = [] PATTERNS_V2 = [] RESULT = { "v1": PATTERNS_V1, "v2": PATTERNS_V2, } class CallVisitor(ast.NodeVisitor): def visit_Call(self, node): if isinstance(node.func, ast.Name): name = node.func.id else: return if name == "client_path_patterns": PATTERNS_V1.append(node.args[0].s) elif name == "client_v2_patterns": PATTERNS_V2.append(node.args[0].s) def find_patterns_in_code(input_code): input_ast = ast.parse(input_code) visitor = CallVisitor() visitor.visit(input_ast) def find_patterns_in_file(filepath): with open(filepath) as f: find_patterns_in_code(f.read()) parser = argparse.ArgumentParser(description='Find url patterns.') parser.add_argument( "directories", nargs='+', metavar="DIR", help="Directories to search for definitions" ) args = parser.parse_args() for directory in args.directories: for root, dirs, files in os.walk(directory): for filename in files: if filename.endswith(".py"): filepath = os.path.join(root, filename) find_patterns_in_file(filepath) PATTERNS_V1.sort() PATTERNS_V2.sort() yaml.dump(RESULT, sys.stdout, default_flow_style=False) synapse-0.24.0/scripts-dev/make_identicons.pl000077500000000000000000000027061317335640100212260ustar00rootroot00000000000000#!/usr/bin/env perl use strict; use warnings; use DBI; use DBD::SQLite; use JSON; use Getopt::Long; my $db; # = "homeserver.db"; my $server = "http://localhost:8008"; my $size = 320; GetOptions("db|d=s", \$db, "server|s=s", \$server, "width|w=i", \$size) or usage(); usage() unless $db; my $dbh = DBI->connect("dbi:SQLite:dbname=$db","","") || die $DBI::errstr; my $res = $dbh->selectall_arrayref("select token, name from access_tokens, users where access_tokens.user_id = users.id group by user_id") || die $DBI::errstr; foreach (@$res) { my ($token, $mxid) = ($_->[0], $_->[1]); my ($user_id) = ($mxid =~ m/@(.*):/); my ($url) = $dbh->selectrow_array("select avatar_url from profiles where user_id=?", undef, $user_id); if (!$url || $url =~ /#auto$/) { `curl -s -o tmp.png "$server/_matrix/media/v1/identicon?name=${mxid}&width=$size&height=$size"`; my $json = `curl -s -X POST -H "Content-Type: image/png" -T "tmp.png" $server/_matrix/media/v1/upload?access_token=$token`; my $content_uri = from_json($json)->{content_uri}; `curl -X PUT -H "Content-Type: application/json" --data '{ "avatar_url": "${content_uri}#auto"}' $server/_matrix/client/api/v1/profile/${mxid}/avatar_url?access_token=$token`; } } sub usage { die "usage: ./make-identicons.pl\n\t-d database [e.g. homeserver.db]\n\t-s homeserver (default: http://localhost:8008)\n\t-w identicon size in pixels (default 320)"; }synapse-0.24.0/scripts-dev/nuke-room-from-db.sh000077500000000000000000000040301317335640100213210ustar00rootroot00000000000000#!/bin/bash ## CAUTION: ## This script will remove (hopefully) all trace of the given room ID from ## your homeserver.db ## Do not run it lightly. ROOMID="$1" sqlite3 homeserver.db <= (2, 7, 9): # As of version 2.7.9, urllib2 now checks SSL certs import ssl f = urllib2.urlopen(req, context=ssl.SSLContext(ssl.PROTOCOL_SSLv23)) else: f = urllib2.urlopen(req) f.read() f.close() print "Success." except urllib2.HTTPError as e: print "ERROR! 
Received %d %s" % (e.code, e.reason,) if 400 <= e.code < 500: if e.info().type == "application/json": resp = json.load(e) if "error" in resp: print resp["error"] sys.exit(1) def register_new_user(user, password, server_location, shared_secret, admin): if not user: try: default_user = getpass.getuser() except: default_user = None if default_user: user = raw_input("New user localpart [%s]: " % (default_user,)) if not user: user = default_user else: user = raw_input("New user localpart: ") if not user: print "Invalid user name" sys.exit(1) if not password: password = getpass.getpass("Password: ") if not password: print "Password cannot be blank." sys.exit(1) confirm_password = getpass.getpass("Confirm password: ") if password != confirm_password: print "Passwords do not match" sys.exit(1) if not admin: admin = raw_input("Make admin [no]: ") if admin in ("y", "yes", "true"): admin = True else: admin = False request_registration(user, password, server_location, shared_secret, bool(admin)) if __name__ == "__main__": parser = argparse.ArgumentParser( description="Used to register new users with a given home server when" " registration has been disabled. The home server must be" " configured with the 'registration_shared_secret' option" " set.", ) parser.add_argument( "-u", "--user", default=None, help="Local part of the new user. Will prompt if omitted.", ) parser.add_argument( "-p", "--password", default=None, help="New password for user. Will prompt if omitted.", ) parser.add_argument( "-a", "--admin", action="store_true", help="Register new user as an admin. Will prompt if omitted.", ) group = parser.add_mutually_exclusive_group(required=True) group.add_argument( "-c", "--config", type=argparse.FileType('r'), help="Path to server config file. Used to read in shared secret.", ) group.add_argument( "-k", "--shared-secret", help="Shared secret as defined in server config file.", ) parser.add_argument( "server_url", default="https://localhost:8448", nargs='?', help="URL to use to talk to the home server. Defaults to " " 'https://localhost:8448'.", ) args = parser.parse_args() if "config" in args and args.config: config = yaml.safe_load(args.config) secret = config.get("registration_shared_secret", None) if not secret: print "No 'registration_shared_secret' defined in config." sys.exit(1) else: secret = args.shared_secret register_new_user(args.user, args.password, args.server_url, secret, args.admin) synapse-0.24.0/scripts/synapse_port_db000077500000000000000000000724601317335640100201030ustar00rootroot00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
from twisted.internet import defer, reactor from twisted.enterprise import adbapi from synapse.storage._base import LoggingTransaction, SQLBaseStore from synapse.storage.engines import create_engine from synapse.storage.prepare_database import prepare_database import argparse import curses import logging import sys import time import traceback import yaml logger = logging.getLogger("synapse_port_db") BOOLEAN_COLUMNS = { "events": ["processed", "outlier", "contains_url"], "rooms": ["is_public"], "event_edges": ["is_state"], "presence_list": ["accepted"], "presence_stream": ["currently_active"], "public_room_list_stream": ["visibility"], "device_lists_outbound_pokes": ["sent"], "users_who_share_rooms": ["share_private"], } APPEND_ONLY_TABLES = [ "event_content_hashes", "event_reference_hashes", "event_signatures", "event_edge_hashes", "events", "event_json", "state_events", "room_memberships", "feedback", "topics", "room_names", "rooms", "local_media_repository", "local_media_repository_thumbnails", "remote_media_cache", "remote_media_cache_thumbnails", "redactions", "event_edges", "event_auth", "received_transactions", "sent_transactions", "transaction_id_to_pdu", "users", "state_groups", "state_groups_state", "event_to_state_groups", "rejections", "event_search", "presence_stream", "push_rules_stream", "current_state_resets", "ex_outlier_stream", "cache_invalidation_stream", "public_room_list_stream", "state_group_edges", "stream_ordering_to_exterm", ] end_error_exec_info = None class Store(object): """This object is used to pull out some of the convenience API from the Storage layer. *All* database interactions should go through this object. """ def __init__(self, db_pool, engine): self.db_pool = db_pool self.database_engine = engine _simple_insert_txn = SQLBaseStore.__dict__["_simple_insert_txn"] _simple_insert = SQLBaseStore.__dict__["_simple_insert"] _simple_select_onecol_txn = SQLBaseStore.__dict__["_simple_select_onecol_txn"] _simple_select_onecol = SQLBaseStore.__dict__["_simple_select_onecol"] _simple_select_one = SQLBaseStore.__dict__["_simple_select_one"] _simple_select_one_txn = SQLBaseStore.__dict__["_simple_select_one_txn"] _simple_select_one_onecol = SQLBaseStore.__dict__["_simple_select_one_onecol"] _simple_select_one_onecol_txn = SQLBaseStore.__dict__[ "_simple_select_one_onecol_txn" ] _simple_update_one = SQLBaseStore.__dict__["_simple_update_one"] _simple_update_one_txn = SQLBaseStore.__dict__["_simple_update_one_txn"] def runInteraction(self, desc, func, *args, **kwargs): def r(conn): try: i = 0 N = 5 while True: try: txn = conn.cursor() return func( LoggingTransaction(txn, desc, self.database_engine, [], []), *args, **kwargs ) except self.database_engine.module.DatabaseError as e: if self.database_engine.is_deadlock(e): logger.warn("[TXN DEADLOCK] {%s} %d/%d", desc, i, N) if i < N: i += 1 conn.rollback() continue raise except Exception as e: logger.debug("[TXN FAIL] {%s} %s", desc, e) raise return self.db_pool.runWithConnection(r) def execute(self, f, *args, **kwargs): return self.runInteraction(f.__name__, f, *args, **kwargs) def execute_sql(self, sql, *args): def r(txn): txn.execute(sql, args) return txn.fetchall() return self.runInteraction("execute_sql", r) def insert_many_txn(self, txn, table, headers, rows): sql = "INSERT INTO %s (%s) VALUES (%s)" % ( table, ", ".join(k for k in headers), ", ".join("%s" for _ in headers) ) try: txn.executemany(sql, rows) except: logger.exception( "Failed to insert: %s", table, ) raise class Porter(object): def __init__(self, 
**kwargs): self.__dict__.update(kwargs) @defer.inlineCallbacks def setup_table(self, table): if table in APPEND_ONLY_TABLES: # It's safe to just carry on inserting. row = yield self.postgres_store._simple_select_one( table="port_from_sqlite3", keyvalues={"table_name": table}, retcols=("forward_rowid", "backward_rowid"), allow_none=True, ) total_to_port = None if row is None: if table == "sent_transactions": forward_chunk, already_ported, total_to_port = ( yield self._setup_sent_transactions() ) backward_chunk = 0 else: yield self.postgres_store._simple_insert( table="port_from_sqlite3", values={ "table_name": table, "forward_rowid": 1, "backward_rowid": 0, } ) forward_chunk = 1 backward_chunk = 0 already_ported = 0 else: forward_chunk = row["forward_rowid"] backward_chunk = row["backward_rowid"] if total_to_port is None: already_ported, total_to_port = yield self._get_total_count_to_port( table, forward_chunk, backward_chunk ) else: def delete_all(txn): txn.execute( "DELETE FROM port_from_sqlite3 WHERE table_name = %s", (table,) ) txn.execute("TRUNCATE %s CASCADE" % (table,)) yield self.postgres_store.execute(delete_all) yield self.postgres_store._simple_insert( table="port_from_sqlite3", values={ "table_name": table, "forward_rowid": 1, "backward_rowid": 0, } ) forward_chunk = 1 backward_chunk = 0 already_ported, total_to_port = yield self._get_total_count_to_port( table, forward_chunk, backward_chunk ) defer.returnValue( (table, already_ported, total_to_port, forward_chunk, backward_chunk) ) @defer.inlineCallbacks def handle_table(self, table, postgres_size, table_size, forward_chunk, backward_chunk): if not table_size: return self.progress.add_table(table, postgres_size, table_size) if table == "event_search": yield self.handle_search_table( postgres_size, table_size, forward_chunk, backward_chunk ) return if table in ( "user_directory", "user_directory_search", "users_who_share_rooms", "users_in_pubic_room", ): # We don't port these tables, as they're a faff and we can regenreate # them anyway. self.progress.update(table, table_size) # Mark table as done return if table == "user_directory_stream_pos": # We need to make sure there is a single row, `(X, null), as that is # what synapse expects to be there. yield self.postgres_store._simple_insert( table=table, values={"stream_id": None}, ) self.progress.update(table, table_size) # Mark table as done return forward_select = ( "SELECT rowid, * FROM %s WHERE rowid >= ? ORDER BY rowid LIMIT ?" % (table,) ) backward_select = ( "SELECT rowid, * FROM %s WHERE rowid <= ? ORDER BY rowid LIMIT ?" 
% (table,) ) do_forward = [True] do_backward = [True] while True: def r(txn): forward_rows = [] backward_rows = [] if do_forward[0]: txn.execute(forward_select, (forward_chunk, self.batch_size,)) forward_rows = txn.fetchall() if not forward_rows: do_forward[0] = False if do_backward[0]: txn.execute(backward_select, (backward_chunk, self.batch_size,)) backward_rows = txn.fetchall() if not backward_rows: do_backward[0] = False if forward_rows or backward_rows: headers = [column[0] for column in txn.description] else: headers = None return headers, forward_rows, backward_rows headers, frows, brows = yield self.sqlite_store.runInteraction( "select", r ) if frows or brows: if frows: forward_chunk = max(row[0] for row in frows) + 1 if brows: backward_chunk = min(row[0] for row in brows) - 1 rows = frows + brows self._convert_rows(table, headers, rows) def insert(txn): self.postgres_store.insert_many_txn( txn, table, headers[1:], rows ) self.postgres_store._simple_update_one_txn( txn, table="port_from_sqlite3", keyvalues={"table_name": table}, updatevalues={ "forward_rowid": forward_chunk, "backward_rowid": backward_chunk, }, ) yield self.postgres_store.execute(insert) postgres_size += len(rows) self.progress.update(table, postgres_size) else: return @defer.inlineCallbacks def handle_search_table(self, postgres_size, table_size, forward_chunk, backward_chunk): select = ( "SELECT es.rowid, es.*, e.origin_server_ts, e.stream_ordering" " FROM event_search as es" " INNER JOIN events AS e USING (event_id, room_id)" " WHERE es.rowid >= ?" " ORDER BY es.rowid LIMIT ?" ) while True: def r(txn): txn.execute(select, (forward_chunk, self.batch_size,)) rows = txn.fetchall() headers = [column[0] for column in txn.description] return headers, rows headers, rows = yield self.sqlite_store.runInteraction("select", r) if rows: forward_chunk = rows[-1][0] + 1 # We have to treat event_search differently since it has a # different structure in the two different databases. 
def insert(txn): sql = ( "INSERT INTO event_search (event_id, room_id, key," " sender, vector, origin_server_ts, stream_ordering)" " VALUES (?,?,?,?,to_tsvector('english', ?),?,?)" ) rows_dict = [] for row in rows: d = dict(zip(headers, row)) if "\0" in d['value']: logger.warn('dropping search row %s', d) else: rows_dict.append(d) txn.executemany(sql, [ ( row["event_id"], row["room_id"], row["key"], row["sender"], row["value"], row["origin_server_ts"], row["stream_ordering"], ) for row in rows_dict ]) self.postgres_store._simple_update_one_txn( txn, table="port_from_sqlite3", keyvalues={"table_name": "event_search"}, updatevalues={ "forward_rowid": forward_chunk, "backward_rowid": backward_chunk, }, ) yield self.postgres_store.execute(insert) postgres_size += len(rows) self.progress.update("event_search", postgres_size) else: return def setup_db(self, db_config, database_engine): db_conn = database_engine.module.connect( **{ k: v for k, v in db_config.get("args", {}).items() if not k.startswith("cp_") } ) prepare_database(db_conn, database_engine, config=None) db_conn.commit() @defer.inlineCallbacks def run(self): try: sqlite_db_pool = adbapi.ConnectionPool( self.sqlite_config["name"], **self.sqlite_config["args"] ) postgres_db_pool = adbapi.ConnectionPool( self.postgres_config["name"], **self.postgres_config["args"] ) sqlite_engine = create_engine(sqlite_config) postgres_engine = create_engine(postgres_config) self.sqlite_store = Store(sqlite_db_pool, sqlite_engine) self.postgres_store = Store(postgres_db_pool, postgres_engine) yield self.postgres_store.execute( postgres_engine.check_database ) # Step 1. Set up databases. self.progress.set_state("Preparing SQLite3") self.setup_db(sqlite_config, sqlite_engine) self.progress.set_state("Preparing PostgreSQL") self.setup_db(postgres_config, postgres_engine) # Step 2. Get tables. self.progress.set_state("Fetching tables") sqlite_tables = yield self.sqlite_store._simple_select_onecol( table="sqlite_master", keyvalues={ "type": "table", }, retcol="name", ) postgres_tables = yield self.postgres_store._simple_select_onecol( table="information_schema.tables", keyvalues={}, retcol="distinct table_name", ) tables = set(sqlite_tables) & set(postgres_tables) self.progress.set_state("Creating tables") logger.info("Found %d tables", len(tables)) def create_port_table(txn): txn.execute( "CREATE TABLE port_from_sqlite3 (" " table_name varchar(100) NOT NULL UNIQUE," " forward_rowid bigint NOT NULL," " backward_rowid bigint NOT NULL" ")" ) # The old port script created a table with just a "rowid" column. # We want people to be able to rerun this script from an old port # so that they can pick up any missing events that were not # ported across. def alter_table(txn): txn.execute( "ALTER TABLE IF EXISTS port_from_sqlite3" " RENAME rowid TO forward_rowid" ) txn.execute( "ALTER TABLE IF EXISTS port_from_sqlite3" " ADD backward_rowid bigint NOT NULL DEFAULT 0" ) try: yield self.postgres_store.runInteraction( "alter_table", alter_table ) except Exception as e: logger.info("Failed to create port table: %s", e) try: yield self.postgres_store.runInteraction( "create_port_table", create_port_table ) except Exception as e: logger.info("Failed to create port table: %s", e) self.progress.set_state("Setting up") # Set up tables. setup_res = yield defer.gatherResults( [ self.setup_table(table) for table in tables if table not in ["schema_version", "applied_schema_deltas"] and not table.startswith("sqlite_") ], consumeErrors=True, ) # Process tables. 
yield defer.gatherResults( [ self.handle_table(*res) for res in setup_res ], consumeErrors=True, ) self.progress.done() except: global end_error_exec_info end_error_exec_info = sys.exc_info() logger.exception("") finally: reactor.stop() def _convert_rows(self, table, headers, rows): bool_col_names = BOOLEAN_COLUMNS.get(table, []) bool_cols = [ i for i, h in enumerate(headers) if h in bool_col_names ] def conv(j, col): if j in bool_cols: return bool(col) return col for i, row in enumerate(rows): rows[i] = tuple( conv(j, col) for j, col in enumerate(row) if j > 0 ) @defer.inlineCallbacks def _setup_sent_transactions(self): # Only save things from the last day yesterday = int(time.time() * 1000) - 86400000 # And save the max transaction id from each destination select = ( "SELECT rowid, * FROM sent_transactions WHERE rowid IN (" "SELECT max(rowid) FROM sent_transactions" " GROUP BY destination" ")" ) def r(txn): txn.execute(select) rows = txn.fetchall() headers = [column[0] for column in txn.description] ts_ind = headers.index('ts') return headers, [r for r in rows if r[ts_ind] < yesterday] headers, rows = yield self.sqlite_store.runInteraction( "select", r, ) self._convert_rows("sent_transactions", headers, rows) inserted_rows = len(rows) if inserted_rows: max_inserted_rowid = max(r[0] for r in rows) def insert(txn): self.postgres_store.insert_many_txn( txn, "sent_transactions", headers[1:], rows ) yield self.postgres_store.execute(insert) else: max_inserted_rowid = 0 def get_start_id(txn): txn.execute( "SELECT rowid FROM sent_transactions WHERE ts >= ?" " ORDER BY rowid ASC LIMIT 1", (yesterday,) ) rows = txn.fetchall() if rows: return rows[0][0] else: return 1 next_chunk = yield self.sqlite_store.execute(get_start_id) next_chunk = max(max_inserted_rowid + 1, next_chunk) yield self.postgres_store._simple_insert( table="port_from_sqlite3", values={ "table_name": "sent_transactions", "forward_rowid": next_chunk, "backward_rowid": 0, } ) def get_sent_table_size(txn): txn.execute( "SELECT count(*) FROM sent_transactions" " WHERE ts >= ?", (yesterday,) ) size, = txn.fetchone() return int(size) remaining_count = yield self.sqlite_store.execute( get_sent_table_size ) total_count = remaining_count + inserted_rows defer.returnValue((next_chunk, inserted_rows, total_count)) @defer.inlineCallbacks def _get_remaining_count_to_port(self, table, forward_chunk, backward_chunk): frows = yield self.sqlite_store.execute_sql( "SELECT count(*) FROM %s WHERE rowid >= ?" % (table,), forward_chunk, ) brows = yield self.sqlite_store.execute_sql( "SELECT count(*) FROM %s WHERE rowid <= ?" 
% (table,), backward_chunk, ) defer.returnValue(frows[0][0] + brows[0][0]) @defer.inlineCallbacks def _get_already_ported_count(self, table): rows = yield self.postgres_store.execute_sql( "SELECT count(*) FROM %s" % (table,), ) defer.returnValue(rows[0][0]) @defer.inlineCallbacks def _get_total_count_to_port(self, table, forward_chunk, backward_chunk): remaining, done = yield defer.gatherResults( [ self._get_remaining_count_to_port(table, forward_chunk, backward_chunk), self._get_already_ported_count(table), ], consumeErrors=True, ) remaining = int(remaining) if remaining else 0 done = int(done) if done else 0 defer.returnValue((done, remaining + done)) ############################################## ###### The following is simply UI stuff ###### ############################################## class Progress(object): """Used to report progress of the port """ def __init__(self): self.tables = {} self.start_time = int(time.time()) def add_table(self, table, cur, size): self.tables[table] = { "start": cur, "num_done": cur, "total": size, "perc": int(cur * 100 / size), } def update(self, table, num_done): data = self.tables[table] data["num_done"] = num_done data["perc"] = int(num_done * 100 / data["total"]) def done(self): pass class CursesProgress(Progress): """Reports progress to a curses window """ def __init__(self, stdscr): self.stdscr = stdscr curses.use_default_colors() curses.curs_set(0) curses.init_pair(1, curses.COLOR_RED, -1) curses.init_pair(2, curses.COLOR_GREEN, -1) self.last_update = 0 self.finished = False self.total_processed = 0 self.total_remaining = 0 super(CursesProgress, self).__init__() def update(self, table, num_done): super(CursesProgress, self).update(table, num_done) self.total_processed = 0 self.total_remaining = 0 for table, data in self.tables.items(): self.total_processed += data["num_done"] - data["start"] self.total_remaining += data["total"] - data["num_done"] self.render() def render(self, force=False): now = time.time() if not force and now - self.last_update < 0.2: # reactor.callLater(1, self.render) return self.stdscr.clear() rows, cols = self.stdscr.getmaxyx() duration = int(now) - int(self.start_time) minutes, seconds = divmod(duration, 60) duration_str = '%02dm %02ds' % (minutes, seconds,) if self.finished: status = "Time spent: %s (Done!)" % (duration_str,) else: if self.total_processed > 0: left = float(self.total_remaining) / self.total_processed est_remaining = (int(now) - self.start_time) * left est_remaining_str = '%02dm %02ds remaining' % divmod(est_remaining, 60) else: est_remaining_str = "Unknown" status = ( "Time spent: %s (est. 
remaining: %s)" % (duration_str, est_remaining_str,) ) self.stdscr.addstr( 0, 0, status, curses.A_BOLD, ) max_len = max([len(t) for t in self.tables.keys()]) left_margin = 5 middle_space = 1 items = self.tables.items() items.sort( key=lambda i: (i[1]["perc"], i[0]), ) for i, (table, data) in enumerate(items): if i + 2 >= rows: break perc = data["perc"] color = curses.color_pair(2) if perc == 100 else curses.color_pair(1) self.stdscr.addstr( i + 2, left_margin + max_len - len(table), table, curses.A_BOLD | color, ) size = 20 progress = "[%s%s]" % ( "#" * int(perc * size / 100), " " * (size - int(perc * size / 100)), ) self.stdscr.addstr( i + 2, left_margin + max_len + middle_space, "%s %3d%% (%d/%d)" % (progress, perc, data["num_done"], data["total"]), ) if self.finished: self.stdscr.addstr( rows - 1, 0, "Press any key to exit...", ) self.stdscr.refresh() self.last_update = time.time() def done(self): self.finished = True self.render(True) self.stdscr.getch() def set_state(self, state): self.stdscr.clear() self.stdscr.addstr( 0, 0, state + "...", curses.A_BOLD, ) self.stdscr.refresh() class TerminalProgress(Progress): """Just prints progress to the terminal """ def update(self, table, num_done): super(TerminalProgress, self).update(table, num_done) data = self.tables[table] print "%s: %d%% (%d/%d)" % ( table, data["perc"], data["num_done"], data["total"], ) def set_state(self, state): print state + "..." ############################################## ############################################## if __name__ == "__main__": parser = argparse.ArgumentParser( description="A script to port an existing synapse SQLite database to" " a new PostgreSQL database." ) parser.add_argument("-v", action='store_true') parser.add_argument( "--sqlite-database", required=True, help="The snapshot of the SQLite database file. 
This must not be" " currently used by a running synapse server" ) parser.add_argument( "--postgres-config", type=argparse.FileType('r'), required=True, help="The database config file for the PostgreSQL database" ) parser.add_argument( "--curses", action='store_true', help="display a curses based progress UI" ) parser.add_argument( "--batch-size", type=int, default=1000, help="The number of rows to select from the SQLite table each" " iteration [default=1000]", ) args = parser.parse_args() logging_config = { "level": logging.DEBUG if args.v else logging.INFO, "format": "%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(message)s" } if args.curses: logging_config["filename"] = "port-synapse.log" logging.basicConfig(**logging_config) sqlite_config = { "name": "sqlite3", "args": { "database": args.sqlite_database, "cp_min": 1, "cp_max": 1, "check_same_thread": False, }, } postgres_config = yaml.safe_load(args.postgres_config) if "database" in postgres_config: postgres_config = postgres_config["database"] if "name" not in postgres_config: sys.stderr.write("Malformed database config: no 'name'") sys.exit(2) if postgres_config["name"] != "psycopg2": sys.stderr.write("Database must use 'psycopg2' connector.") sys.exit(3) def start(stdscr=None): if stdscr: progress = CursesProgress(stdscr) else: progress = TerminalProgress() porter = Porter( sqlite_config=sqlite_config, postgres_config=postgres_config, progress=progress, batch_size=args.batch_size, ) reactor.callWhenRunning(porter.run) reactor.run() if args.curses: curses.wrapper(start) else: start() if end_error_exec_info: exc_type, exc_value, exc_traceback = end_error_exec_info traceback.print_exception(exc_type, exc_value, exc_traceback) synapse-0.24.0/setup.cfg000066400000000000000000000005241317335640100151040ustar00rootroot00000000000000[build_sphinx] source-dir = docs/sphinx build-dir = docs/build all_files = 1 [trial] test_suite = tests [check-manifest] ignore = contrib contrib/* docs/* pylint.cfg tox.ini [flake8] max-line-length = 90 # W503 requires that binary operators be at the end, not start, of lines. Erik doesn't like it. ignore = W503 synapse-0.24.0/setup.py000077500000000000000000000065301317335640100150030ustar00rootroot00000000000000#!/usr/bin/env python # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import glob import os from setuptools import setup, find_packages, Command import sys here = os.path.abspath(os.path.dirname(__file__)) # Some notes on `setup.py test`: # # Once upon a time we used to try to make `setup.py test` run `tox` to run the # tests. That's a bad idea for three reasons: # # 1: `setup.py test` is supposed to find out whether the tests work in the # *current* environmentt, not whatever tox sets up. # 2: Empirically, trying to install tox during the test run wasn't working ("No # module named virtualenv"). # 3: The tox documentation advises against it[1]. # # Even further back in time, we used to use setuptools_trial [2]. 
That has its # own set of issues: for instance, it requires installation of Twisted to build # an sdist (because the recommended mode of usage is to add it to # `setup_requires`). That in turn means that in order to successfully run tox # you have to have the python header files installed for whichever version of # python tox uses (which is python3 on recent ubuntus, for example). # # So, for now at least, we stick with what appears to be the convention among # Twisted projects, and don't attempt to do anything when someone runs # `setup.py test`; instead we direct people to run `trial` directly if they # care. # # [1]: http://tox.readthedocs.io/en/2.5.0/example/basic.html#integration-with-setup-py-test-command # [2]: https://pypi.python.org/pypi/setuptools_trial class TestCommand(Command): user_options = [] def initialize_options(self): pass def finalize_options(self): pass def run(self): print ("""Synapse's tests cannot be run via setup.py. To run them, try: PYTHONPATH="." trial tests """) def read_file(path_segments): """Read a file from the package. Takes a list of strings to join to make the path""" file_path = os.path.join(here, *path_segments) with open(file_path) as f: return f.read() def exec_file(path_segments): """Execute a single python file to get the variables defined in it""" result = {} code = read_file(path_segments) exec(code, result) return result version = exec_file(("synapse", "__init__.py"))["__version__"] dependencies = exec_file(("synapse", "python_dependencies.py")) long_description = read_file(("README.rst",)) setup( name="matrix-synapse", version=version, packages=find_packages(exclude=["tests", "tests.*"]), description="Reference Synapse Home Server", install_requires=dependencies['requirements'](include_conditional=True).keys(), dependency_links=dependencies["DEPENDENCY_LINKS"].values(), include_package_data=True, zip_safe=False, long_description=long_description, scripts=["synctl"] + glob.glob("scripts/*"), cmdclass={'test': TestCommand}, ) synapse-0.24.0/synapse/000077500000000000000000000000001317335640100147445ustar00rootroot00000000000000synapse-0.24.0/synapse/__init__.py000066400000000000000000000012741317335640100170610ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ This is a reference implementation of a Matrix home server. """ __version__ = "0.24.0" synapse-0.24.0/synapse/api/000077500000000000000000000000001317335640100155155ustar00rootroot00000000000000synapse-0.24.0/synapse/api/__init__.py000066400000000000000000000011371317335640100176300ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. synapse-0.24.0/synapse/api/auth.py000066400000000000000000000657051317335640100170450ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014 - 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import logging import pymacaroons from twisted.internet import defer import synapse.types from synapse import event_auth from synapse.api.constants import EventTypes, Membership, JoinRules from synapse.api.errors import AuthError, Codes from synapse.types import UserID from synapse.util.caches import register_cache, CACHE_SIZE_FACTOR from synapse.util.caches.lrucache import LruCache from synapse.util.metrics import Measure logger = logging.getLogger(__name__) AuthEventTypes = ( EventTypes.Create, EventTypes.Member, EventTypes.PowerLevels, EventTypes.JoinRules, EventTypes.RoomHistoryVisibility, EventTypes.ThirdPartyInvite, ) # guests always get this device id. GUEST_DEVICE_ID = "guest_device" class _InvalidMacaroonException(Exception): pass class Auth(object): """ FIXME: This class contains a mix of functions for authenticating users of our client-server API and authenticating events added to room graphs. """ def __init__(self, hs): self.hs = hs self.clock = hs.get_clock() self.store = hs.get_datastore() self.state = hs.get_state_handler() self.TOKEN_NOT_FOUND_HTTP_STATUS = 401 self.token_cache = LruCache(CACHE_SIZE_FACTOR * 10000) register_cache("token_cache", self.token_cache) @defer.inlineCallbacks def check_from_context(self, event, context, do_sig_check=True): auth_events_ids = yield self.compute_auth_events( event, context.prev_state_ids, for_verification=True, ) auth_events = yield self.store.get_events(auth_events_ids) auth_events = { (e.type, e.state_key): e for e in auth_events.values() } self.check(event, auth_events=auth_events, do_sig_check=do_sig_check) def check(self, event, auth_events, do_sig_check=True): """ Checks if this event is correctly authed. Args: event: the event being checked. auth_events (dict: event-key -> event): the existing room state. Returns: True if the auth checks pass. """ with Measure(self.clock, "auth.check"): event_auth.check(event, auth_events, do_sig_check=do_sig_check) @defer.inlineCallbacks def check_joined_room(self, room_id, user_id, current_state=None): """Check if the user is currently joined in the room Args: room_id(str): The room to check. user_id(str): The user to check. current_state(dict): Optional map of the current state of the room. If provided then that map is used to check whether they are a member of the room. Otherwise the current membership is loaded from the database. Raises: AuthError if the user is not in the room. 
Returns: A deferred membership event for the user if the user is in the room. """ if current_state: member = current_state.get( (EventTypes.Member, user_id), None ) else: member = yield self.state.get_current_state( room_id=room_id, event_type=EventTypes.Member, state_key=user_id ) self._check_joined_room(member, user_id, room_id) defer.returnValue(member) @defer.inlineCallbacks def check_user_was_in_room(self, room_id, user_id): """Check if the user was in the room at some point. Args: room_id(str): The room to check. user_id(str): The user to check. Raises: AuthError if the user was never in the room. Returns: A deferred membership event for the user if the user was in the room. This will be the join event if they are currently joined to the room. This will be the leave event if they have left the room. """ member = yield self.state.get_current_state( room_id=room_id, event_type=EventTypes.Member, state_key=user_id ) membership = member.membership if member else None if membership not in (Membership.JOIN, Membership.LEAVE): raise AuthError(403, "User %s not in room %s" % ( user_id, room_id )) if membership == Membership.LEAVE: forgot = yield self.store.did_forget(user_id, room_id) if forgot: raise AuthError(403, "User %s not in room %s" % ( user_id, room_id )) defer.returnValue(member) @defer.inlineCallbacks def check_host_in_room(self, room_id, host): with Measure(self.clock, "check_host_in_room"): latest_event_ids = yield self.store.is_host_joined(room_id, host) defer.returnValue(latest_event_ids) def _check_joined_room(self, member, user_id, room_id): if not member or member.membership != Membership.JOIN: raise AuthError(403, "User %s not in room %s (%s)" % ( user_id, room_id, repr(member) )) def can_federate(self, event, auth_events): creation_event = auth_events.get((EventTypes.Create, "")) return creation_event.content.get("m.federate", True) is True def get_public_keys(self, invite_event): return event_auth.get_public_keys(invite_event) @defer.inlineCallbacks def get_user_by_req(self, request, allow_guest=False, rights="access"): """ Get a registered user's ID. Args: request - An HTTP request with an access_token query parameter. Returns: defer.Deferred: resolves to a ``synapse.types.Requester`` object Raises: AuthError if no user by that token exists or the token is invalid. """ # Can optionally look elsewhere in the request (e.g. headers) try: user_id, app_service = yield self._get_appservice_user_id(request) if user_id: request.authenticated_entity = user_id defer.returnValue( synapse.types.create_requester(user_id, app_service=app_service) ) access_token = get_access_token_from_request( request, self.TOKEN_NOT_FOUND_HTTP_STATUS ) user_info = yield self.get_user_by_access_token(access_token, rights) user = user_info["user"] token_id = user_info["token_id"] is_guest = user_info["is_guest"] # device_id may not be present if get_user_by_access_token has been # stubbed out. 
device_id = user_info.get("device_id") ip_addr = self.hs.get_ip_from_request(request) user_agent = request.requestHeaders.getRawHeaders( "User-Agent", default=[""] )[0] if user and access_token and ip_addr: self.store.insert_client_ip( user_id=user.to_string(), access_token=access_token, ip=ip_addr, user_agent=user_agent, device_id=device_id, ) if is_guest and not allow_guest: raise AuthError( 403, "Guest access not allowed", errcode=Codes.GUEST_ACCESS_FORBIDDEN ) request.authenticated_entity = user.to_string() defer.returnValue(synapse.types.create_requester( user, token_id, is_guest, device_id, app_service=app_service) ) except KeyError: raise AuthError( self.TOKEN_NOT_FOUND_HTTP_STATUS, "Missing access token.", errcode=Codes.MISSING_TOKEN ) @defer.inlineCallbacks def _get_appservice_user_id(self, request): app_service = self.store.get_app_service_by_token( get_access_token_from_request( request, self.TOKEN_NOT_FOUND_HTTP_STATUS ) ) if app_service is None: defer.returnValue((None, None)) if "user_id" not in request.args: defer.returnValue((app_service.sender, app_service)) user_id = request.args["user_id"][0] if app_service.sender == user_id: defer.returnValue((app_service.sender, app_service)) if not app_service.is_interested_in_user(user_id): raise AuthError( 403, "Application service cannot masquerade as this user." ) if not (yield self.store.get_user_by_id(user_id)): raise AuthError( 403, "Application service has not registered this user" ) defer.returnValue((user_id, app_service)) @defer.inlineCallbacks def get_user_by_access_token(self, token, rights="access"): """ Validate access token and get user_id from it Args: token (str): The access token to get the user by. rights (str): The operation being performed; the access token must allow this. Returns: dict : dict that includes the user and the ID of their access token. Raises: AuthError if no user by that token exists or the token is invalid. """ try: user_id, guest = self._parse_and_validate_macaroon(token, rights) except _InvalidMacaroonException: # doesn't look like a macaroon: treat it as an opaque token which # must be in the database. # TODO: it would be nice to get rid of this, but apparently some # people use access tokens which aren't macaroons r = yield self._look_up_user_by_access_token(token) defer.returnValue(r) try: user = UserID.from_string(user_id) if guest: # Guest access tokens are not stored in the database (there can # only be one access token per guest, anyway). # # In order to prevent guest access tokens being used as regular # user access tokens (and hence getting around the invalidation # process), we look up the user id and check that it is indeed # a guest user. # # It would of course be much easier to store guest access # tokens in the database as well, but that would break existing # guest tokens. 
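# --- Illustrative sketch (not part of this module) ------------------------
# The access tokens being validated here are macaroons whose first-party
# caveats carry the token type, the user id and (for guests) a
# "guest = true" flag - the same caveats that validate_macaroon() below
# checks with satisfy_exact(). A hand-rolled example of what such a token
# looks like; the location, identifier and key are made-up values used only
# for illustration:
import pymacaroons

def make_example_access_token(user_id, secret_key="not-a-real-secret"):
    macaroon = pymacaroons.Macaroon(
        location="example.com",
        identifier="key",
        key=secret_key,
    )
    macaroon.add_first_party_caveat("gen = 1")
    macaroon.add_first_party_caveat("type = access")
    macaroon.add_first_party_caveat("user_id = %s" % (user_id,))
    return macaroon.serialize()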
stored_user = yield self.store.get_user_by_id(user_id) if not stored_user: raise AuthError( self.TOKEN_NOT_FOUND_HTTP_STATUS, "Unknown user_id %s" % user_id, errcode=Codes.UNKNOWN_TOKEN ) if not stored_user["is_guest"]: raise AuthError( self.TOKEN_NOT_FOUND_HTTP_STATUS, "Guest access token used for regular user", errcode=Codes.UNKNOWN_TOKEN ) ret = { "user": user, "is_guest": True, "token_id": None, # all guests get the same device id "device_id": GUEST_DEVICE_ID, } elif rights == "delete_pusher": # We don't store these tokens in the database ret = { "user": user, "is_guest": False, "token_id": None, "device_id": None, } else: # This codepath exists for several reasons: # * so that we can actually return a token ID, which is used # in some parts of the schema (where we probably ought to # use device IDs instead) # * the only way we currently have to invalidate an # access_token is by removing it from the database, so we # have to check here that it is still in the db # * some attributes (notably device_id) aren't stored in the # macaroon. They probably should be. # TODO: build the dictionary from the macaroon once the # above are fixed ret = yield self._look_up_user_by_access_token(token) if ret["user"] != user: logger.error( "Macaroon user (%s) != DB user (%s)", user, ret["user"] ) raise AuthError( self.TOKEN_NOT_FOUND_HTTP_STATUS, "User mismatch in macaroon", errcode=Codes.UNKNOWN_TOKEN ) defer.returnValue(ret) except (pymacaroons.exceptions.MacaroonException, TypeError, ValueError): raise AuthError( self.TOKEN_NOT_FOUND_HTTP_STATUS, "Invalid macaroon passed.", errcode=Codes.UNKNOWN_TOKEN ) def _parse_and_validate_macaroon(self, token, rights="access"): """Takes a macaroon and tries to parse and validate it. This is cached if and only if rights == access and there isn't an expiry. On invalid macaroon raises _InvalidMacaroonException Returns: (user_id, is_guest) """ if rights == "access": cached = self.token_cache.get(token, None) if cached: return cached try: macaroon = pymacaroons.Macaroon.deserialize(token) except Exception: # deserialize can throw more-or-less anything # doesn't look like a macaroon: treat it as an opaque token which # must be in the database. # TODO: it would be nice to get rid of this, but apparently some # people use access tokens which aren't macaroons raise _InvalidMacaroonException() try: user_id = self.get_user_id_from_macaroon(macaroon) has_expiry = False guest = False for caveat in macaroon.caveats: if caveat.caveat_id.startswith("time "): has_expiry = True elif caveat.caveat_id == "guest = true": guest = True self.validate_macaroon( macaroon, rights, self.hs.config.expire_access_token, user_id=user_id, ) except (pymacaroons.exceptions.MacaroonException, TypeError, ValueError): raise AuthError( self.TOKEN_NOT_FOUND_HTTP_STATUS, "Invalid macaroon passed.", errcode=Codes.UNKNOWN_TOKEN ) if not has_expiry and rights == "access": self.token_cache[token] = (user_id, guest) return user_id, guest def get_user_id_from_macaroon(self, macaroon): """Retrieve the user_id given by the caveats on the macaroon. Does *not* validate the macaroon. 
Args: macaroon (pymacaroons.Macaroon): The macaroon to validate Returns: (str) user id Raises: AuthError if there is no user_id caveat in the macaroon """ user_prefix = "user_id = " for caveat in macaroon.caveats: if caveat.caveat_id.startswith(user_prefix): return caveat.caveat_id[len(user_prefix):] raise AuthError( self.TOKEN_NOT_FOUND_HTTP_STATUS, "No user caveat in macaroon", errcode=Codes.UNKNOWN_TOKEN ) def validate_macaroon(self, macaroon, type_string, verify_expiry, user_id): """ validate that a Macaroon is understood by and was signed by this server. Args: macaroon(pymacaroons.Macaroon): The macaroon to validate type_string(str): The kind of token required (e.g. "access", "delete_pusher") verify_expiry(bool): Whether to verify whether the macaroon has expired. user_id (str): The user_id required """ v = pymacaroons.Verifier() # the verifier runs a test for every caveat on the macaroon, to check # that it is met for the current request. Each caveat must match at # least one of the predicates specified by satisfy_exact or # specify_general. v.satisfy_exact("gen = 1") v.satisfy_exact("type = " + type_string) v.satisfy_exact("user_id = %s" % user_id) v.satisfy_exact("guest = true") # verify_expiry should really always be True, but there exist access # tokens in the wild which expire when they should not, so we can't # enforce expiry yet (so we have to allow any caveat starting with # 'time < ' in access tokens). # # On the other hand, short-term login tokens (as used by CAS login, for # example) have an expiry time which we do want to enforce. if verify_expiry: v.satisfy_general(self._verify_expiry) else: v.satisfy_general(lambda c: c.startswith("time < ")) # access_tokens include a nonce for uniqueness: any value is acceptable v.satisfy_general(lambda c: c.startswith("nonce = ")) v.verify(macaroon, self.hs.config.macaroon_secret_key) def _verify_expiry(self, caveat): prefix = "time < " if not caveat.startswith(prefix): return False expiry = int(caveat[len(prefix):]) now = self.hs.get_clock().time_msec() return now < expiry @defer.inlineCallbacks def _look_up_user_by_access_token(self, token): ret = yield self.store.get_user_by_access_token(token) if not ret: logger.warn("Unrecognised access token - not in store: %s" % (token,)) raise AuthError( self.TOKEN_NOT_FOUND_HTTP_STATUS, "Unrecognised access token.", errcode=Codes.UNKNOWN_TOKEN ) # we use ret.get() below because *lots* of unit tests stub out # get_user_by_access_token in a way where it only returns a couple of # the fields. user_info = { "user": UserID.from_string(ret.get("name")), "token_id": ret.get("token_id", None), "is_guest": False, "device_id": ret.get("device_id"), } defer.returnValue(user_info) def get_appservice_by_req(self, request): try: token = get_access_token_from_request( request, self.TOKEN_NOT_FOUND_HTTP_STATUS ) service = self.store.get_app_service_by_token(token) if not service: logger.warn("Unrecognised appservice access token: %s" % (token,)) raise AuthError( self.TOKEN_NOT_FOUND_HTTP_STATUS, "Unrecognised access token.", errcode=Codes.UNKNOWN_TOKEN ) request.authenticated_entity = service.sender return defer.succeed(service) except KeyError: raise AuthError( self.TOKEN_NOT_FOUND_HTTP_STATUS, "Missing access token." ) def is_server_admin(self, user): """ Check if the given user is a local server admin. 
Args: user (str): mxid of user to check Returns: bool: True if the user is an admin """ return self.store.is_server_admin(user) @defer.inlineCallbacks def add_auth_events(self, builder, context): auth_ids = yield self.compute_auth_events(builder, context.prev_state_ids) auth_events_entries = yield self.store.add_event_hashes( auth_ids ) builder.auth_events = auth_events_entries @defer.inlineCallbacks def compute_auth_events(self, event, current_state_ids, for_verification=False): if event.type == EventTypes.Create: defer.returnValue([]) auth_ids = [] key = (EventTypes.PowerLevels, "", ) power_level_event_id = current_state_ids.get(key) if power_level_event_id: auth_ids.append(power_level_event_id) key = (EventTypes.JoinRules, "", ) join_rule_event_id = current_state_ids.get(key) key = (EventTypes.Member, event.user_id, ) member_event_id = current_state_ids.get(key) key = (EventTypes.Create, "", ) create_event_id = current_state_ids.get(key) if create_event_id: auth_ids.append(create_event_id) if join_rule_event_id: join_rule_event = yield self.store.get_event(join_rule_event_id) join_rule = join_rule_event.content.get("join_rule") is_public = join_rule == JoinRules.PUBLIC if join_rule else False else: is_public = False if event.type == EventTypes.Member: e_type = event.content["membership"] if e_type in [Membership.JOIN, Membership.INVITE]: if join_rule_event_id: auth_ids.append(join_rule_event_id) if e_type == Membership.JOIN: if member_event_id and not is_public: auth_ids.append(member_event_id) else: if member_event_id: auth_ids.append(member_event_id) if for_verification: key = (EventTypes.Member, event.state_key, ) existing_event_id = current_state_ids.get(key) if existing_event_id: auth_ids.append(existing_event_id) if e_type == Membership.INVITE: if "third_party_invite" in event.content: key = ( EventTypes.ThirdPartyInvite, event.content["third_party_invite"]["signed"]["token"] ) third_party_invite_id = current_state_ids.get(key) if third_party_invite_id: auth_ids.append(third_party_invite_id) elif member_event_id: member_event = yield self.store.get_event(member_event_id) if member_event.content["membership"] == Membership.JOIN: auth_ids.append(member_event.event_id) defer.returnValue(auth_ids) def check_redaction(self, event, auth_events): """Check whether the event sender is allowed to redact the target event. Returns: True if the the sender is allowed to redact the target event if the target event was created by them. False if the sender is allowed to redact the target event with no further checks. Raises: AuthError if the event sender is definitely not allowed to redact the target event. """ return event_auth.check_redaction(event, auth_events) @defer.inlineCallbacks def check_can_change_room_list(self, room_id, user): """Check if the user is allowed to edit the room's entry in the published room list. Args: room_id (str) user (UserID) """ is_admin = yield self.is_server_admin(user) if is_admin: defer.returnValue(True) user_id = user.to_string() yield self.check_joined_room(room_id, user_id) # We currently require the user is a "moderator" in the room. 
We do this # by checking if they would (theoretically) be able to change the # m.room.aliases events power_level_event = yield self.state.get_current_state( room_id, EventTypes.PowerLevels, "" ) auth_events = {} if power_level_event: auth_events[(EventTypes.PowerLevels, "")] = power_level_event send_level = event_auth.get_send_level( EventTypes.Aliases, "", auth_events ) user_level = event_auth.get_user_power_level(user_id, auth_events) if user_level < send_level: raise AuthError( 403, "This server requires you to be a moderator in the room to" " edit its room list entry" ) def has_access_token(request): """Checks if the request has an access_token. Returns: bool: False if no access_token was given, True otherwise. """ query_params = request.args.get("access_token") auth_headers = request.requestHeaders.getRawHeaders("Authorization") return bool(query_params) or bool(auth_headers) def get_access_token_from_request(request, token_not_found_http_status=401): """Extracts the access_token from the request. Args: request: The http request. token_not_found_http_status(int): The HTTP status code to set in the AuthError if the token isn't found. This is used in some of the legacy APIs to change the status code to 403 from the default of 401 since some of the old clients depended on auth errors returning 403. Returns: str: The access_token Raises: AuthError: If there isn't an access_token in the request. """ auth_headers = request.requestHeaders.getRawHeaders("Authorization") query_params = request.args.get("access_token") if auth_headers: # Try the get the access_token from a "Authorization: Bearer" # header if query_params is not None: raise AuthError( token_not_found_http_status, "Mixing Authorization headers and access_token query parameters.", errcode=Codes.MISSING_TOKEN, ) if len(auth_headers) > 1: raise AuthError( token_not_found_http_status, "Too many Authorization headers.", errcode=Codes.MISSING_TOKEN, ) parts = auth_headers[0].split(" ") if parts[0] == "Bearer" and len(parts) == 2: return parts[1] else: raise AuthError( token_not_found_http_status, "Invalid Authorization header.", errcode=Codes.MISSING_TOKEN, ) else: # Try to get the access_token from the query params. if not query_params: raise AuthError( token_not_found_http_status, "Missing access token.", errcode=Codes.MISSING_TOKEN ) return query_params[0] synapse-0.24.0/synapse/api/constants.py000066400000000000000000000046731317335640100201150ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2017 Vector Creations Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
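# --- Illustrative sketch (not part of synapse itself) ----------------------
# get_access_token_from_request() above accepts the token either from an
# "Authorization: Bearer <token>" header or from an ?access_token= query
# parameter, and rejects requests that mix the two. The same decision,
# reduced to plain lists of raw values so it can be read and tested in
# isolation; extract_access_token is a hypothetical standalone helper:
def extract_access_token(auth_headers, query_params):
    """auth_headers/query_params: lists of raw values, or None if absent."""
    if auth_headers:
        if query_params:
            raise ValueError(
                "Mixing Authorization headers and access_token query parameters."
            )
        if len(auth_headers) > 1:
            raise ValueError("Too many Authorization headers.")
        parts = auth_headers[0].split(" ")
        if parts[0] == "Bearer" and len(parts) == 2:
            return parts[1]
        raise ValueError("Invalid Authorization header.")
    if not query_params:
        raise ValueError("Missing access token.")
    return query_params[0]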
"""Contains constants from the specification.""" class Membership(object): """Represents the membership states of a user in a room.""" INVITE = u"invite" JOIN = u"join" KNOCK = u"knock" LEAVE = u"leave" BAN = u"ban" LIST = (INVITE, JOIN, KNOCK, LEAVE, BAN) class PresenceState(object): """Represents the presence state of a user.""" OFFLINE = u"offline" UNAVAILABLE = u"unavailable" ONLINE = u"online" class JoinRules(object): PUBLIC = u"public" KNOCK = u"knock" INVITE = u"invite" PRIVATE = u"private" class LoginType(object): PASSWORD = u"m.login.password" EMAIL_IDENTITY = u"m.login.email.identity" MSISDN = u"m.login.msisdn" RECAPTCHA = u"m.login.recaptcha" DUMMY = u"m.login.dummy" # Only for C/S API v1 APPLICATION_SERVICE = u"m.login.application_service" SHARED_SECRET = u"org.matrix.login.shared_secret" class EventTypes(object): Member = "m.room.member" Create = "m.room.create" JoinRules = "m.room.join_rules" PowerLevels = "m.room.power_levels" Aliases = "m.room.aliases" Redaction = "m.room.redaction" ThirdPartyInvite = "m.room.third_party_invite" RoomHistoryVisibility = "m.room.history_visibility" CanonicalAlias = "m.room.canonical_alias" RoomAvatar = "m.room.avatar" GuestAccess = "m.room.guest_access" # These are used for validation Message = "m.room.message" Topic = "m.room.topic" Name = "m.room.name" class RejectedReason(object): AUTH_ERROR = "auth_error" REPLACED = "replaced" NOT_ANCESTOR = "not_ancestor" class RoomCreationPreset(object): PRIVATE_CHAT = "private_chat" PUBLIC_CHAT = "public_chat" TRUSTED_PRIVATE_CHAT = "trusted_private_chat" class ThirdPartyEntityKind(object): USER = "user" LOCATION = "location" synapse-0.24.0/synapse/api/errors.py000066400000000000000000000231731317335640100174110ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Contains exceptions and error codes.""" import json import logging logger = logging.getLogger(__name__) class Codes(object): UNRECOGNIZED = "M_UNRECOGNIZED" UNAUTHORIZED = "M_UNAUTHORIZED" FORBIDDEN = "M_FORBIDDEN" BAD_JSON = "M_BAD_JSON" NOT_JSON = "M_NOT_JSON" USER_IN_USE = "M_USER_IN_USE" ROOM_IN_USE = "M_ROOM_IN_USE" BAD_PAGINATION = "M_BAD_PAGINATION" BAD_STATE = "M_BAD_STATE" UNKNOWN = "M_UNKNOWN" NOT_FOUND = "M_NOT_FOUND" MISSING_TOKEN = "M_MISSING_TOKEN" UNKNOWN_TOKEN = "M_UNKNOWN_TOKEN" GUEST_ACCESS_FORBIDDEN = "M_GUEST_ACCESS_FORBIDDEN" LIMIT_EXCEEDED = "M_LIMIT_EXCEEDED" CAPTCHA_NEEDED = "M_CAPTCHA_NEEDED" CAPTCHA_INVALID = "M_CAPTCHA_INVALID" MISSING_PARAM = "M_MISSING_PARAM" INVALID_PARAM = "M_INVALID_PARAM" TOO_LARGE = "M_TOO_LARGE" EXCLUSIVE = "M_EXCLUSIVE" THREEPID_AUTH_FAILED = "M_THREEPID_AUTH_FAILED" THREEPID_IN_USE = "M_THREEPID_IN_USE" THREEPID_NOT_FOUND = "M_THREEPID_NOT_FOUND" INVALID_USERNAME = "M_INVALID_USERNAME" SERVER_NOT_TRUSTED = "M_SERVER_NOT_TRUSTED" class CodeMessageException(RuntimeError): """An exception with integer code and message string attributes. 
Attributes: code (int): HTTP error code msg (str): string describing the error """ def __init__(self, code, msg): super(CodeMessageException, self).__init__("%d: %s" % (code, msg)) self.code = code self.msg = msg def error_dict(self): return cs_error(self.msg) class MatrixCodeMessageException(CodeMessageException): """An error from a general matrix endpoint, eg. from a proxied Matrix API call. Attributes: errcode (str): Matrix error code e.g 'M_FORBIDDEN' """ def __init__(self, code, msg, errcode=Codes.UNKNOWN): super(MatrixCodeMessageException, self).__init__(code, msg) self.errcode = errcode class SynapseError(CodeMessageException): """A base exception type for matrix errors which have an errcode and error message (as well as an HTTP status code). Attributes: errcode (str): Matrix error code e.g 'M_FORBIDDEN' """ def __init__(self, code, msg, errcode=Codes.UNKNOWN): """Constructs a synapse error. Args: code (int): The integer error code (an HTTP response code) msg (str): The human-readable error message. errcode (str): The matrix error code e.g 'M_FORBIDDEN' """ super(SynapseError, self).__init__(code, msg) self.errcode = errcode def error_dict(self): return cs_error( self.msg, self.errcode, ) @classmethod def from_http_response_exception(cls, err): """Make a SynapseError based on an HTTPResponseException This is useful when a proxied request has failed, and we need to decide how to map the failure onto a matrix error to send back to the client. An attempt is made to parse the body of the http response as a matrix error. If that succeeds, the errcode and error message from the body are used as the errcode and error message in the new synapse error. Otherwise, the errcode is set to M_UNKNOWN, and the error message is set to the reason code from the HTTP response. 
Args: err (HttpResponseException): Returns: SynapseError: """ # try to parse the body as json, to get better errcode/msg, but # default to M_UNKNOWN with the HTTP status as the error text try: j = json.loads(err.response) except ValueError: j = {} errcode = j.get('errcode', Codes.UNKNOWN) errmsg = j.get('error', err.msg) res = SynapseError(err.code, errmsg, errcode) return res class RegistrationError(SynapseError): """An error raised when a registration event fails.""" pass class UnrecognizedRequestError(SynapseError): """An error indicating we don't understand the request you're trying to make""" def __init__(self, *args, **kwargs): if "errcode" not in kwargs: kwargs["errcode"] = Codes.UNRECOGNIZED message = None if len(args) == 0: message = "Unrecognized request" else: message = args[0] super(UnrecognizedRequestError, self).__init__( 400, message, **kwargs ) class NotFoundError(SynapseError): """An error indicating we can't find the thing you asked for""" def __init__(self, msg="Not found", errcode=Codes.NOT_FOUND): super(NotFoundError, self).__init__( 404, msg, errcode=errcode ) class AuthError(SynapseError): """An error raised when there was a problem authorising an event.""" def __init__(self, *args, **kwargs): if "errcode" not in kwargs: kwargs["errcode"] = Codes.FORBIDDEN super(AuthError, self).__init__(*args, **kwargs) class EventSizeError(SynapseError): """An error raised when an event is too big.""" def __init__(self, *args, **kwargs): if "errcode" not in kwargs: kwargs["errcode"] = Codes.TOO_LARGE super(EventSizeError, self).__init__(413, *args, **kwargs) class EventStreamError(SynapseError): """An error raised when there a problem with the event stream.""" def __init__(self, *args, **kwargs): if "errcode" not in kwargs: kwargs["errcode"] = Codes.BAD_PAGINATION super(EventStreamError, self).__init__(*args, **kwargs) class LoginError(SynapseError): """An error raised when there was a problem logging in.""" pass class StoreError(SynapseError): """An error raised when there was a problem storing some data.""" pass class InvalidCaptchaError(SynapseError): def __init__(self, code=400, msg="Invalid captcha.", error_url=None, errcode=Codes.CAPTCHA_INVALID): super(InvalidCaptchaError, self).__init__(code, msg, errcode) self.error_url = error_url def error_dict(self): return cs_error( self.msg, self.errcode, error_url=self.error_url, ) class LimitExceededError(SynapseError): """A client has sent too many requests and is being throttled. """ def __init__(self, code=429, msg="Too Many Requests", retry_after_ms=None, errcode=Codes.LIMIT_EXCEEDED): super(LimitExceededError, self).__init__(code, msg, errcode) self.retry_after_ms = retry_after_ms def error_dict(self): return cs_error( self.msg, self.errcode, retry_after_ms=self.retry_after_ms, ) def cs_exception(exception): if isinstance(exception, CodeMessageException): return exception.error_dict() else: logger.error("Unknown exception type: %s", type(exception)) return {} def cs_error(msg, code=Codes.UNKNOWN, **kwargs): """ Utility method for constructing an error response for client-server interactions. Args: msg (str): The error message. code (int): The error code. kwargs : Additional keys to add to the response. Returns: A dict representing the error response JSON. """ err = {"error": msg, "errcode": code} for key, value in kwargs.iteritems(): err[key] = value return err class FederationError(RuntimeError): """ This class is used to inform remote home servers about erroneous PDUs they sent us. 
FATAL: The remote server could not interpret the source event. (e.g., it was missing a required field) ERROR: The remote server interpreted the event, but it failed some other check (e.g. auth) WARN: The remote server accepted the event, but believes some part of it is wrong (e.g., it referred to an invalid event) """ def __init__(self, level, code, reason, affected, source=None): if level not in ["FATAL", "ERROR", "WARN"]: raise ValueError("Level is not valid: %s" % (level,)) self.level = level self.code = code self.reason = reason self.affected = affected self.source = source msg = "%s %s: %s" % (level, code, reason,) super(FederationError, self).__init__(msg) def get_dict(self): return { "level": self.level, "code": self.code, "reason": self.reason, "affected": self.affected, "source": self.source if self.source else self.affected, } class HttpResponseException(CodeMessageException): """ Represents an HTTP-level failure of an outbound request Attributes: response (str): body of response """ def __init__(self, code, msg, response): """ Args: code (int): HTTP status code msg (str): reason phrase from HTTP response status line response (str): body of response """ super(HttpResponseException, self).__init__(code, msg) self.response = response synapse-0.24.0/synapse/api/filtering.py000066400000000000000000000317261317335640100200630ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from synapse.api.errors import SynapseError from synapse.storage.presence import UserPresenceState from synapse.types import UserID, RoomID from twisted.internet import defer import ujson as json import jsonschema from jsonschema import FormatChecker FILTER_SCHEMA = { "additionalProperties": False, "type": "object", "properties": { "limit": { "type": "number" }, "senders": { "$ref": "#/definitions/user_id_array" }, "not_senders": { "$ref": "#/definitions/user_id_array" }, # TODO: We don't limit event type values but we probably should... 
# check types are valid event types "types": { "type": "array", "items": { "type": "string" } }, "not_types": { "type": "array", "items": { "type": "string" } } } } ROOM_FILTER_SCHEMA = { "additionalProperties": False, "type": "object", "properties": { "not_rooms": { "$ref": "#/definitions/room_id_array" }, "rooms": { "$ref": "#/definitions/room_id_array" }, "ephemeral": { "$ref": "#/definitions/room_event_filter" }, "include_leave": { "type": "boolean" }, "state": { "$ref": "#/definitions/room_event_filter" }, "timeline": { "$ref": "#/definitions/room_event_filter" }, "account_data": { "$ref": "#/definitions/room_event_filter" }, } } ROOM_EVENT_FILTER_SCHEMA = { "additionalProperties": False, "type": "object", "properties": { "limit": { "type": "number" }, "senders": { "$ref": "#/definitions/user_id_array" }, "not_senders": { "$ref": "#/definitions/user_id_array" }, "types": { "type": "array", "items": { "type": "string" } }, "not_types": { "type": "array", "items": { "type": "string" } }, "rooms": { "$ref": "#/definitions/room_id_array" }, "not_rooms": { "$ref": "#/definitions/room_id_array" }, "contains_url": { "type": "boolean" } } } USER_ID_ARRAY_SCHEMA = { "type": "array", "items": { "type": "string", "format": "matrix_user_id" } } ROOM_ID_ARRAY_SCHEMA = { "type": "array", "items": { "type": "string", "format": "matrix_room_id" } } USER_FILTER_SCHEMA = { "$schema": "http://json-schema.org/draft-04/schema#", "description": "schema for a Sync filter", "type": "object", "definitions": { "room_id_array": ROOM_ID_ARRAY_SCHEMA, "user_id_array": USER_ID_ARRAY_SCHEMA, "filter": FILTER_SCHEMA, "room_filter": ROOM_FILTER_SCHEMA, "room_event_filter": ROOM_EVENT_FILTER_SCHEMA }, "properties": { "presence": { "$ref": "#/definitions/filter" }, "account_data": { "$ref": "#/definitions/filter" }, "room": { "$ref": "#/definitions/room_filter" }, "event_format": { "type": "string", "enum": ["client", "federation"] }, "event_fields": { "type": "array", "items": { "type": "string", # Don't allow '\\' in event field filters. This makes matching # events a lot easier as we can then use a negative lookbehind # assertion to split '\.' If we allowed \\ then it would # incorrectly split '\\.' See synapse.events.utils.serialize_event "pattern": "^((?!\\\).)*$" } } }, "additionalProperties": False } @FormatChecker.cls_checks('matrix_room_id') def matrix_room_id_validator(room_id_str): return RoomID.from_string(room_id_str) @FormatChecker.cls_checks('matrix_user_id') def matrix_user_id_validator(user_id_str): return UserID.from_string(user_id_str) class Filtering(object): def __init__(self, hs): super(Filtering, self).__init__() self.store = hs.get_datastore() @defer.inlineCallbacks def get_user_filter(self, user_localpart, filter_id): result = yield self.store.get_user_filter(user_localpart, filter_id) defer.returnValue(FilterCollection(result)) def add_user_filter(self, user_localpart, user_filter): self.check_valid_filter(user_filter) return self.store.add_user_filter(user_localpart, user_filter) # TODO(paul): surely we should probably add a delete_user_filter or # replace_user_filter at some point? There's no REST API specified for # them however def check_valid_filter(self, user_filter_json): """Check if the provided filter is valid. This inspects all definitions contained within the filter. Args: user_filter_json(dict): The filter Raises: SynapseError: If the filter is not valid. """ # NB: Filters are the complete json blobs. "Definitions" are an # individual top-level key e.g. public_user_data. 
Filters are made of # many definitions. try: jsonschema.validate(user_filter_json, USER_FILTER_SCHEMA, format_checker=FormatChecker()) except jsonschema.ValidationError as e: raise SynapseError(400, e.message) class FilterCollection(object): def __init__(self, filter_json): self._filter_json = filter_json room_filter_json = self._filter_json.get("room", {}) self._room_filter = Filter({ k: v for k, v in room_filter_json.items() if k in ("rooms", "not_rooms") }) self._room_timeline_filter = Filter(room_filter_json.get("timeline", {})) self._room_state_filter = Filter(room_filter_json.get("state", {})) self._room_ephemeral_filter = Filter(room_filter_json.get("ephemeral", {})) self._room_account_data = Filter(room_filter_json.get("account_data", {})) self._presence_filter = Filter(filter_json.get("presence", {})) self._account_data = Filter(filter_json.get("account_data", {})) self.include_leave = filter_json.get("room", {}).get( "include_leave", False ) self.event_fields = filter_json.get("event_fields", []) def __repr__(self): return "" % (json.dumps(self._filter_json),) def get_filter_json(self): return self._filter_json def timeline_limit(self): return self._room_timeline_filter.limit() def presence_limit(self): return self._presence_filter.limit() def ephemeral_limit(self): return self._room_ephemeral_filter.limit() def filter_presence(self, events): return self._presence_filter.filter(events) def filter_account_data(self, events): return self._account_data.filter(events) def filter_room_state(self, events): return self._room_state_filter.filter(self._room_filter.filter(events)) def filter_room_timeline(self, events): return self._room_timeline_filter.filter(self._room_filter.filter(events)) def filter_room_ephemeral(self, events): return self._room_ephemeral_filter.filter(self._room_filter.filter(events)) def filter_room_account_data(self, events): return self._room_account_data.filter(self._room_filter.filter(events)) def blocks_all_presence(self): return ( self._presence_filter.filters_all_types() or self._presence_filter.filters_all_senders() ) def blocks_all_room_ephemeral(self): return ( self._room_ephemeral_filter.filters_all_types() or self._room_ephemeral_filter.filters_all_senders() or self._room_ephemeral_filter.filters_all_rooms() ) def blocks_all_room_timeline(self): return ( self._room_timeline_filter.filters_all_types() or self._room_timeline_filter.filters_all_senders() or self._room_timeline_filter.filters_all_rooms() ) class Filter(object): def __init__(self, filter_json): self.filter_json = filter_json self.types = self.filter_json.get("types", None) self.not_types = self.filter_json.get("not_types", []) self.rooms = self.filter_json.get("rooms", None) self.not_rooms = self.filter_json.get("not_rooms", []) self.senders = self.filter_json.get("senders", None) self.not_senders = self.filter_json.get("not_senders", []) self.contains_url = self.filter_json.get("contains_url", None) def filters_all_types(self): return "*" in self.not_types def filters_all_senders(self): return "*" in self.not_senders def filters_all_rooms(self): return "*" in self.not_rooms def check(self, event): """Checks whether the filter matches the given event. Returns: bool: True if the event matches """ # We usually get the full "events" as dictionaries coming through, # except for presence which actually gets passed around as its own # namedtuple type. 
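# --- Illustrative sketch (not part of this module) -------------------------
# What Filter.check() accepts and returns, shown with an ordinary event
# dict. The helper below is purely illustrative (nothing in synapse calls
# it) and assumes the surrounding Filter class:
def _example_filter_check():
    message_filter = Filter({
        "types": ["m.room.message"],
        "not_senders": ["@spammer:example.com"],
    })
    event = {
        "type": "m.room.message",
        "sender": "@alice:example.com",
        "room_id": "!room:example.com",
        "content": {"body": "hello"},
    }
    # True: the type is allowed and the sender is not excluded. An event of
    # another type, or one sent by @spammer:example.com, would return False.
    return message_filter.check(event)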
if isinstance(event, UserPresenceState): sender = event.user_id room_id = None ev_type = "m.presence" is_url = False else: sender = event.get("sender", None) if not sender: # Presence events had their 'sender' in content.user_id, but are # now handled above. We don't know if anything else uses this # form. TODO: Check this and probably remove it. content = event.get("content") # account_data has been allowed to have non-dict content, so # check type first if isinstance(content, dict): sender = content.get("user_id") room_id = event.get("room_id", None) ev_type = event.get("type", None) is_url = "url" in event.get("content", {}) return self.check_fields( room_id, sender, ev_type, is_url, ) def check_fields(self, room_id, sender, event_type, contains_url): """Checks whether the filter matches the given event fields. Returns: bool: True if the event fields match """ literal_keys = { "rooms": lambda v: room_id == v, "senders": lambda v: sender == v, "types": lambda v: _matches_wildcard(event_type, v) } for name, match_func in literal_keys.items(): not_name = "not_%s" % (name,) disallowed_values = getattr(self, not_name) if any(map(match_func, disallowed_values)): return False allowed_values = getattr(self, name) if allowed_values is not None: if not any(map(match_func, allowed_values)): return False contains_url_filter = self.filter_json.get("contains_url") if contains_url_filter is not None: if contains_url_filter != contains_url: return False return True def filter_rooms(self, room_ids): """Apply the 'rooms' filter to a given list of rooms. Args: room_ids (list): A list of room_ids. Returns: list: A list of room_ids that match the filter """ room_ids = set(room_ids) disallowed_rooms = set(self.filter_json.get("not_rooms", [])) room_ids -= disallowed_rooms allowed_rooms = self.filter_json.get("rooms", None) if allowed_rooms is not None: room_ids &= set(allowed_rooms) return room_ids def filter(self, events): return filter(self.check, events) def limit(self): return self.filter_json.get("limit", 10) def _matches_wildcard(actual_value, filter_value): if filter_value.endswith("*"): type_prefix = filter_value[:-1] return actual_value.startswith(type_prefix) else: return actual_value == filter_value DEFAULT_FILTER_COLLECTION = FilterCollection({}) synapse-0.24.0/synapse/api/ratelimiting.py000066400000000000000000000056541317335640100205710ustar00rootroot00000000000000# Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import collections class Ratelimiter(object): """ Ratelimit message sending by user. """ def __init__(self): self.message_counts = collections.OrderedDict() def send_message(self, user_id, time_now_s, msg_rate_hz, burst_count, update=True): """Can the user send a message? Args: user_id: The user sending a message. time_now_s: The time now. msg_rate_hz: The long term number of messages a user can send in a second. burst_count: How many messages the user can send before being limited. update (bool): Whether to update the message rates or not. 
This is useful to check if a message would be allowed to be sent before its ready to be actually sent. Returns: A pair of a bool indicating if they can send a message now and a time in seconds of when they can next send a message. """ self.prune_message_counts(time_now_s) message_count, time_start, _ignored = self.message_counts.get( user_id, (0., time_now_s, None), ) time_delta = time_now_s - time_start sent_count = message_count - time_delta * msg_rate_hz if sent_count < 0: allowed = True time_start = time_now_s message_count = 1. elif sent_count > burst_count - 1.: allowed = False else: allowed = True message_count += 1 if update: self.message_counts[user_id] = ( message_count, time_start, msg_rate_hz ) if msg_rate_hz > 0: time_allowed = ( time_start + (message_count - burst_count + 1) / msg_rate_hz ) if time_allowed < time_now_s: time_allowed = time_now_s else: time_allowed = -1 return allowed, time_allowed def prune_message_counts(self, time_now_s): for user_id in self.message_counts.keys(): message_count, time_start, msg_rate_hz = ( self.message_counts[user_id] ) time_delta = time_now_s - time_start if message_count - time_delta * msg_rate_hz > 0: break else: del self.message_counts[user_id] synapse-0.24.0/synapse/api/urls.py000066400000000000000000000021041317335640100170510ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Contains the URL paths to prefix various aspects of the server with. """ CLIENT_PREFIX = "/_matrix/client/api/v1" CLIENT_V2_ALPHA_PREFIX = "/_matrix/client/v2_alpha" FEDERATION_PREFIX = "/_matrix/federation/v1" STATIC_PREFIX = "/_matrix/static" WEB_CLIENT_PREFIX = "/_matrix/client" CONTENT_REPO_PREFIX = "/_matrix/content" SERVER_KEY_PREFIX = "/_matrix/key/v1" SERVER_KEY_V2_PREFIX = "/_matrix/key/v2" MEDIA_PREFIX = "/_matrix/media/r0" LEGACY_MEDIA_PREFIX = "/_matrix/media/v1" synapse-0.24.0/synapse/app/000077500000000000000000000000001317335640100155245ustar00rootroot00000000000000synapse-0.24.0/synapse/app/__init__.py000066400000000000000000000020411317335640100176320ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
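# --- Illustrative sketch (not part of this package) -------------------------
# Example use of the Ratelimiter.send_message() API defined in
# synapse/api/ratelimiting.py above: it returns whether the action is
# allowed right now, plus the earliest time (in seconds) at which the next
# message may be sent. The rate and burst values below are arbitrary:
import time
from synapse.api.ratelimiting import Ratelimiter

def example_ratelimit(user_id="@alice:example.com"):
    limiter = Ratelimiter()
    allowed, allowed_at = limiter.send_message(
        user_id,
        time_now_s=time.time(),
        msg_rate_hz=0.2,   # long-term average of one message every 5 seconds
        burst_count=10.,   # up to 10 messages before throttling kicks in
    )
    return allowed, allowed_at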
import sys sys.dont_write_bytecode = True from synapse import python_dependencies # noqa: E402 try: python_dependencies.check_requirements() except python_dependencies.MissingRequirementError as e: message = "\n".join([ "Missing Requirement: %s" % (e.message,), "To install run:", " pip install --upgrade --force \"%s\"" % (e.dependency,), "", ]) sys.stderr.writelines(message) sys.exit(1) synapse-0.24.0/synapse/app/_base.py000066400000000000000000000074521317335640100171570ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2017 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import gc import logging import sys try: import affinity except: affinity = None from daemonize import Daemonize from synapse.util import PreserveLoggingContext from synapse.util.rlimit import change_resource_limit from twisted.internet import reactor def start_worker_reactor(appname, config): """ Run the reactor in the main process Daemonizes if necessary, and then configures some resources, before starting the reactor. Pulls configuration from the 'worker' settings in 'config'. Args: appname (str): application name which will be sent to syslog config (synapse.config.Config): config object """ logger = logging.getLogger(config.worker_app) start_reactor( appname, config.soft_file_limit, config.gc_thresholds, config.worker_pid_file, config.worker_daemonize, config.worker_cpu_affinity, logger, ) def start_reactor( appname, soft_file_limit, gc_thresholds, pid_file, daemonize, cpu_affinity, logger, ): """ Run the reactor in the main process Daemonizes if necessary, and then configures some resources, before starting the reactor Args: appname (str): application name which will be sent to syslog soft_file_limit (int): gc_thresholds: pid_file (str): name of pid file to write to if daemonize is True daemonize (bool): true to run the reactor in a background process cpu_affinity (int|None): cpu affinity mask logger (logging.Logger): logger instance to pass to Daemonize """ def run(): # make sure that we run the reactor with the sentinel log context, # otherwise other PreserveLoggingContext instances will get confused # and complain when they see the logcontext arbitrarily swapping # between the sentinel and `run` logcontexts. 
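        # Within this block the worker: optionally pins the process to the CPUs
        # named by the cpu_affinity bitmask via the optional 'affinity' package
        # (quitting with an error if that package is missing), applies the
        # configured soft file-descriptor limit, sets the GC thresholds if any
        # were configured, and finally hands control to the Twisted reactor.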
with PreserveLoggingContext(): logger.info("Running") if cpu_affinity is not None: if not affinity: quit_with_error( "Missing package 'affinity' required for cpu_affinity\n" "option\n\n" "Install by running:\n\n" " pip install affinity\n\n" ) logger.info("Setting CPU affinity to %s" % cpu_affinity) affinity.set_process_affinity_mask(0, cpu_affinity) change_resource_limit(soft_file_limit) if gc_thresholds: gc.set_threshold(*gc_thresholds) reactor.run() if daemonize: daemon = Daemonize( app=appname, pid=pid_file, action=run, auto_close_fds=False, verbose=True, logger=logger, ) daemon.start() else: run() def quit_with_error(error_string): message_lines = error_string.split("\n") line_length = max([len(l) for l in message_lines if len(l) < 80]) + 2 sys.stderr.write("*" * line_length + '\n') for line in message_lines: sys.stderr.write(" %s\n" % (line.rstrip(),)) sys.stderr.write("*" * line_length + '\n') sys.exit(1) synapse-0.24.0/synapse/app/appservice.py000066400000000000000000000146431317335640100202470ustar00rootroot00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import logging import sys import synapse from synapse import events from synapse.app import _base from synapse.config._base import ConfigError from synapse.config.homeserver import HomeServerConfig from synapse.config.logger import setup_logging from synapse.http.site import SynapseSite from synapse.metrics.resource import METRICS_PREFIX, MetricsResource from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore from synapse.replication.slave.storage.directory import DirectoryStore from synapse.replication.slave.storage.events import SlavedEventStore from synapse.replication.slave.storage.registration import SlavedRegistrationStore from synapse.replication.tcp.client import ReplicationClientHandler from synapse.server import HomeServer from synapse.storage.engines import create_engine from synapse.util.httpresourcetree import create_resource_tree from synapse.util.logcontext import LoggingContext, preserve_fn from synapse.util.manhole import manhole from synapse.util.versionstring import get_version_string from twisted.internet import reactor from twisted.web.resource import Resource logger = logging.getLogger("synapse.app.appservice") class AppserviceSlaveStore( DirectoryStore, SlavedEventStore, SlavedApplicationServiceStore, SlavedRegistrationStore, ): pass class AppserviceServer(HomeServer): def get_db_conn(self, run_new_connection=True): # Any param beginning with cp_ is a parameter for adbapi, and should # not be passed to the database engine. 
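        # For example, a hypothetical database config of
        #     args: {"user": "synapse", "host": "localhost", "cp_min": 5, "cp_max": 10}
        # is split so that cp_min/cp_max configure adbapi's connection pool,
        # while only user/host are handed to the psycopg2/sqlite3 connect() call.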
db_params = { k: v for k, v in self.db_config.get("args", {}).items() if not k.startswith("cp_") } db_conn = self.database_engine.module.connect(**db_params) if run_new_connection: self.database_engine.on_new_connection(db_conn) return db_conn def setup(self): logger.info("Setting up.") self.datastore = AppserviceSlaveStore(self.get_db_conn(), self) logger.info("Finished setting up.") def _listen_http(self, listener_config): port = listener_config["port"] bind_addresses = listener_config["bind_addresses"] site_tag = listener_config.get("tag", port) resources = {} for res in listener_config["resources"]: for name in res["names"]: if name == "metrics": resources[METRICS_PREFIX] = MetricsResource(self) root_resource = create_resource_tree(resources, Resource()) for address in bind_addresses: reactor.listenTCP( port, SynapseSite( "synapse.access.http.%s" % (site_tag,), site_tag, listener_config, root_resource, ), interface=address ) logger.info("Synapse appservice now listening on port %d", port) def start_listening(self, listeners): for listener in listeners: if listener["type"] == "http": self._listen_http(listener) elif listener["type"] == "manhole": bind_addresses = listener["bind_addresses"] for address in bind_addresses: reactor.listenTCP( listener["port"], manhole( username="matrix", password="rabbithole", globals={"hs": self}, ), interface=address ) else: logger.warn("Unrecognized listener type: %s", listener["type"]) self.get_tcp_replication().start_replication(self) def build_tcp_replication(self): return ASReplicationHandler(self) class ASReplicationHandler(ReplicationClientHandler): def __init__(self, hs): super(ASReplicationHandler, self).__init__(hs.get_datastore()) self.appservice_handler = hs.get_application_service_handler() def on_rdata(self, stream_name, token, rows): super(ASReplicationHandler, self).on_rdata(stream_name, token, rows) if stream_name == "events": max_stream_id = self.store.get_room_max_stream_ordering() preserve_fn( self.appservice_handler.notify_interested_services )(max_stream_id) def start(config_options): try: config = HomeServerConfig.load_config( "Synapse appservice", config_options ) except ConfigError as e: sys.stderr.write("\n" + e.message + "\n") sys.exit(1) assert config.worker_app == "synapse.app.appservice" setup_logging(config, use_worker_options=True) events.USE_FROZEN_DICTS = config.use_frozen_dicts database_engine = create_engine(config.database_config) if config.notify_appservices: sys.stderr.write( "\nThe appservices must be disabled in the main synapse process" "\nbefore they can be run in a separate worker." 
"\nPlease add ``notify_appservices: false`` to the main config" "\n" ) sys.exit(1) # Force the pushers to start since they will be disabled in the main config config.notify_appservices = True ps = AppserviceServer( config.server_name, db_config=config.database_config, config=config, version_string="Synapse/" + get_version_string(synapse), database_engine=database_engine, ) ps.setup() ps.start_listening(config.worker_listeners) def start(): ps.get_datastore().start_profiling() ps.get_state_handler().start_caching() reactor.callWhenRunning(start) _base.start_worker_reactor("synapse-appservice", config) if __name__ == '__main__': with LoggingContext("main"): start(sys.argv[1:]) synapse-0.24.0/synapse/app/client_reader.py000066400000000000000000000152621317335640100207040ustar00rootroot00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import logging import sys import synapse from synapse import events from synapse.app import _base from synapse.config._base import ConfigError from synapse.config.homeserver import HomeServerConfig from synapse.config.logger import setup_logging from synapse.crypto import context_factory from synapse.http.server import JsonResource from synapse.http.site import SynapseSite from synapse.metrics.resource import METRICS_PREFIX, MetricsResource from synapse.replication.slave.storage._base import BaseSlavedStore from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore from synapse.replication.slave.storage.client_ips import SlavedClientIpStore from synapse.replication.slave.storage.directory import DirectoryStore from synapse.replication.slave.storage.events import SlavedEventStore from synapse.replication.slave.storage.keys import SlavedKeyStore from synapse.replication.slave.storage.registration import SlavedRegistrationStore from synapse.replication.slave.storage.room import RoomStore from synapse.replication.slave.storage.transactions import TransactionStore from synapse.replication.tcp.client import ReplicationClientHandler from synapse.rest.client.v1.room import PublicRoomListRestServlet from synapse.server import HomeServer from synapse.storage.engines import create_engine from synapse.util.httpresourcetree import create_resource_tree from synapse.util.logcontext import LoggingContext from synapse.util.manhole import manhole from synapse.util.versionstring import get_version_string from twisted.internet import reactor from twisted.web.resource import Resource logger = logging.getLogger("synapse.app.client_reader") class ClientReaderSlavedStore( SlavedEventStore, SlavedKeyStore, RoomStore, DirectoryStore, SlavedApplicationServiceStore, SlavedRegistrationStore, TransactionStore, SlavedClientIpStore, BaseSlavedStore, ): pass class ClientReaderServer(HomeServer): def get_db_conn(self, run_new_connection=True): # Any param beginning with cp_ is a parameter for adbapi, and should # not be passed to the database engine. 
db_params = { k: v for k, v in self.db_config.get("args", {}).items() if not k.startswith("cp_") } db_conn = self.database_engine.module.connect(**db_params) if run_new_connection: self.database_engine.on_new_connection(db_conn) return db_conn def setup(self): logger.info("Setting up.") self.datastore = ClientReaderSlavedStore(self.get_db_conn(), self) logger.info("Finished setting up.") def _listen_http(self, listener_config): port = listener_config["port"] bind_addresses = listener_config["bind_addresses"] site_tag = listener_config.get("tag", port) resources = {} for res in listener_config["resources"]: for name in res["names"]: if name == "metrics": resources[METRICS_PREFIX] = MetricsResource(self) elif name == "client": resource = JsonResource(self, canonical_json=False) PublicRoomListRestServlet(self).register(resource) resources.update({ "/_matrix/client/r0": resource, "/_matrix/client/unstable": resource, "/_matrix/client/v2_alpha": resource, "/_matrix/client/api/v1": resource, }) root_resource = create_resource_tree(resources, Resource()) for address in bind_addresses: reactor.listenTCP( port, SynapseSite( "synapse.access.http.%s" % (site_tag,), site_tag, listener_config, root_resource, ), interface=address ) logger.info("Synapse client reader now listening on port %d", port) def start_listening(self, listeners): for listener in listeners: if listener["type"] == "http": self._listen_http(listener) elif listener["type"] == "manhole": bind_addresses = listener["bind_addresses"] for address in bind_addresses: reactor.listenTCP( listener["port"], manhole( username="matrix", password="rabbithole", globals={"hs": self}, ), interface=address ) else: logger.warn("Unrecognized listener type: %s", listener["type"]) self.get_tcp_replication().start_replication(self) def build_tcp_replication(self): return ReplicationClientHandler(self.get_datastore()) def start(config_options): try: config = HomeServerConfig.load_config( "Synapse client reader", config_options ) except ConfigError as e: sys.stderr.write("\n" + e.message + "\n") sys.exit(1) assert config.worker_app == "synapse.app.client_reader" setup_logging(config, use_worker_options=True) events.USE_FROZEN_DICTS = config.use_frozen_dicts database_engine = create_engine(config.database_config) tls_server_context_factory = context_factory.ServerContextFactory(config) ss = ClientReaderServer( config.server_name, db_config=config.database_config, tls_server_context_factory=tls_server_context_factory, config=config, version_string="Synapse/" + get_version_string(synapse), database_engine=database_engine, ) ss.setup() ss.get_handlers() ss.start_listening(config.worker_listeners) def start(): ss.get_state_handler().start_caching() ss.get_datastore().start_profiling() reactor.callWhenRunning(start) _base.start_worker_reactor("synapse-client-reader", config) if __name__ == '__main__': with LoggingContext("main"): start(sys.argv[1:]) synapse-0.24.0/synapse/app/federation_reader.py000066400000000000000000000141271317335640100215450ustar00rootroot00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import logging import sys import synapse from synapse import events from synapse.api.urls import FEDERATION_PREFIX from synapse.app import _base from synapse.config._base import ConfigError from synapse.config.homeserver import HomeServerConfig from synapse.config.logger import setup_logging from synapse.crypto import context_factory from synapse.federation.transport.server import TransportLayerServer from synapse.http.site import SynapseSite from synapse.metrics.resource import METRICS_PREFIX, MetricsResource from synapse.replication.slave.storage._base import BaseSlavedStore from synapse.replication.slave.storage.directory import DirectoryStore from synapse.replication.slave.storage.events import SlavedEventStore from synapse.replication.slave.storage.keys import SlavedKeyStore from synapse.replication.slave.storage.room import RoomStore from synapse.replication.slave.storage.transactions import TransactionStore from synapse.replication.tcp.client import ReplicationClientHandler from synapse.server import HomeServer from synapse.storage.engines import create_engine from synapse.util.httpresourcetree import create_resource_tree from synapse.util.logcontext import LoggingContext from synapse.util.manhole import manhole from synapse.util.versionstring import get_version_string from twisted.internet import reactor from twisted.web.resource import Resource logger = logging.getLogger("synapse.app.federation_reader") class FederationReaderSlavedStore( SlavedEventStore, SlavedKeyStore, RoomStore, DirectoryStore, TransactionStore, BaseSlavedStore, ): pass class FederationReaderServer(HomeServer): def get_db_conn(self, run_new_connection=True): # Any param beginning with cp_ is a parameter for adbapi, and should # not be passed to the database engine. 
db_params = { k: v for k, v in self.db_config.get("args", {}).items() if not k.startswith("cp_") } db_conn = self.database_engine.module.connect(**db_params) if run_new_connection: self.database_engine.on_new_connection(db_conn) return db_conn def setup(self): logger.info("Setting up.") self.datastore = FederationReaderSlavedStore(self.get_db_conn(), self) logger.info("Finished setting up.") def _listen_http(self, listener_config): port = listener_config["port"] bind_addresses = listener_config["bind_addresses"] site_tag = listener_config.get("tag", port) resources = {} for res in listener_config["resources"]: for name in res["names"]: if name == "metrics": resources[METRICS_PREFIX] = MetricsResource(self) elif name == "federation": resources.update({ FEDERATION_PREFIX: TransportLayerServer(self), }) root_resource = create_resource_tree(resources, Resource()) for address in bind_addresses: reactor.listenTCP( port, SynapseSite( "synapse.access.http.%s" % (site_tag,), site_tag, listener_config, root_resource, ), interface=address ) logger.info("Synapse federation reader now listening on port %d", port) def start_listening(self, listeners): for listener in listeners: if listener["type"] == "http": self._listen_http(listener) elif listener["type"] == "manhole": bind_addresses = listener["bind_addresses"] for address in bind_addresses: reactor.listenTCP( listener["port"], manhole( username="matrix", password="rabbithole", globals={"hs": self}, ), interface=address ) else: logger.warn("Unrecognized listener type: %s", listener["type"]) self.get_tcp_replication().start_replication(self) def build_tcp_replication(self): return ReplicationClientHandler(self.get_datastore()) def start(config_options): try: config = HomeServerConfig.load_config( "Synapse federation reader", config_options ) except ConfigError as e: sys.stderr.write("\n" + e.message + "\n") sys.exit(1) assert config.worker_app == "synapse.app.federation_reader" setup_logging(config, use_worker_options=True) events.USE_FROZEN_DICTS = config.use_frozen_dicts database_engine = create_engine(config.database_config) tls_server_context_factory = context_factory.ServerContextFactory(config) ss = FederationReaderServer( config.server_name, db_config=config.database_config, tls_server_context_factory=tls_server_context_factory, config=config, version_string="Synapse/" + get_version_string(synapse), database_engine=database_engine, ) ss.setup() ss.get_handlers() ss.start_listening(config.worker_listeners) def start(): ss.get_state_handler().start_caching() ss.get_datastore().start_profiling() reactor.callWhenRunning(start) _base.start_worker_reactor("synapse-federation-reader", config) if __name__ == '__main__': with LoggingContext("main"): start(sys.argv[1:]) synapse-0.24.0/synapse/app/federation_sender.py000066400000000000000000000241251317335640100215620ustar00rootroot00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
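# A rough sketch of the listener configuration shape that start_listening() /
# _listen_http() in these worker apps consume, written out as the parsed Python
# structure rather than YAML.  The port numbers and addresses are made up, and
# the recognised resource names vary per worker (e.g. "metrics", "client",
# "federation", "media"):
#
#     worker_listeners = [
#         {
#             "type": "http",
#             "port": 8083,
#             "bind_addresses": ["127.0.0.1"],
#             "tag": "metrics-listener",          # optional; defaults to the port
#             "resources": [{"names": ["metrics"]}],
#         },
#         {"type": "manhole", "port": 9083, "bind_addresses": ["127.0.0.1"]},
#     ]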
import logging import sys import synapse from synapse import events from synapse.app import _base from synapse.config._base import ConfigError from synapse.config.homeserver import HomeServerConfig from synapse.config.logger import setup_logging from synapse.crypto import context_factory from synapse.federation import send_queue from synapse.http.site import SynapseSite from synapse.metrics.resource import METRICS_PREFIX, MetricsResource from synapse.replication.slave.storage.deviceinbox import SlavedDeviceInboxStore from synapse.replication.slave.storage.devices import SlavedDeviceStore from synapse.replication.slave.storage.events import SlavedEventStore from synapse.replication.slave.storage.presence import SlavedPresenceStore from synapse.replication.slave.storage.receipts import SlavedReceiptsStore from synapse.replication.slave.storage.registration import SlavedRegistrationStore from synapse.replication.slave.storage.transactions import TransactionStore from synapse.replication.tcp.client import ReplicationClientHandler from synapse.server import HomeServer from synapse.storage.engines import create_engine from synapse.util.async import Linearizer from synapse.util.httpresourcetree import create_resource_tree from synapse.util.logcontext import LoggingContext, preserve_fn from synapse.util.manhole import manhole from synapse.util.versionstring import get_version_string from twisted.internet import defer, reactor from twisted.web.resource import Resource logger = logging.getLogger("synapse.app.federation_sender") class FederationSenderSlaveStore( SlavedDeviceInboxStore, TransactionStore, SlavedReceiptsStore, SlavedEventStore, SlavedRegistrationStore, SlavedDeviceStore, SlavedPresenceStore, ): def __init__(self, db_conn, hs): super(FederationSenderSlaveStore, self).__init__(db_conn, hs) # We pull out the current federation stream position now so that we # always have a known value for the federation position in memory so # that we don't have to bounce via a deferred once when we start the # replication streams. self.federation_out_pos_startup = self._get_federation_out_pos(db_conn) def _get_federation_out_pos(self, db_conn): sql = ( "SELECT stream_id FROM federation_stream_position" " WHERE type = ?" ) sql = self.database_engine.convert_param_style(sql) txn = db_conn.cursor() txn.execute(sql, ("federation",)) rows = txn.fetchall() txn.close() return rows[0][0] if rows else -1 class FederationSenderServer(HomeServer): def get_db_conn(self, run_new_connection=True): # Any param beginning with cp_ is a parameter for adbapi, and should # not be passed to the database engine. 
db_params = { k: v for k, v in self.db_config.get("args", {}).items() if not k.startswith("cp_") } db_conn = self.database_engine.module.connect(**db_params) if run_new_connection: self.database_engine.on_new_connection(db_conn) return db_conn def setup(self): logger.info("Setting up.") self.datastore = FederationSenderSlaveStore(self.get_db_conn(), self) logger.info("Finished setting up.") def _listen_http(self, listener_config): port = listener_config["port"] bind_addresses = listener_config["bind_addresses"] site_tag = listener_config.get("tag", port) resources = {} for res in listener_config["resources"]: for name in res["names"]: if name == "metrics": resources[METRICS_PREFIX] = MetricsResource(self) root_resource = create_resource_tree(resources, Resource()) for address in bind_addresses: reactor.listenTCP( port, SynapseSite( "synapse.access.http.%s" % (site_tag,), site_tag, listener_config, root_resource, ), interface=address ) logger.info("Synapse federation_sender now listening on port %d", port) def start_listening(self, listeners): for listener in listeners: if listener["type"] == "http": self._listen_http(listener) elif listener["type"] == "manhole": bind_addresses = listener["bind_addresses"] for address in bind_addresses: reactor.listenTCP( listener["port"], manhole( username="matrix", password="rabbithole", globals={"hs": self}, ), interface=address ) else: logger.warn("Unrecognized listener type: %s", listener["type"]) self.get_tcp_replication().start_replication(self) def build_tcp_replication(self): return FederationSenderReplicationHandler(self) class FederationSenderReplicationHandler(ReplicationClientHandler): def __init__(self, hs): super(FederationSenderReplicationHandler, self).__init__(hs.get_datastore()) self.send_handler = FederationSenderHandler(hs, self) def on_rdata(self, stream_name, token, rows): super(FederationSenderReplicationHandler, self).on_rdata( stream_name, token, rows ) self.send_handler.process_replication_rows(stream_name, token, rows) def get_streams_to_replicate(self): args = super(FederationSenderReplicationHandler, self).get_streams_to_replicate() args.update(self.send_handler.stream_positions()) return args def start(config_options): try: config = HomeServerConfig.load_config( "Synapse federation sender", config_options ) except ConfigError as e: sys.stderr.write("\n" + e.message + "\n") sys.exit(1) assert config.worker_app == "synapse.app.federation_sender" setup_logging(config, use_worker_options=True) events.USE_FROZEN_DICTS = config.use_frozen_dicts database_engine = create_engine(config.database_config) if config.send_federation: sys.stderr.write( "\nThe send_federation must be disabled in the main synapse process" "\nbefore they can be run in a separate worker." 
"\nPlease add ``send_federation: false`` to the main config" "\n" ) sys.exit(1) # Force the pushers to start since they will be disabled in the main config config.send_federation = True tls_server_context_factory = context_factory.ServerContextFactory(config) ps = FederationSenderServer( config.server_name, db_config=config.database_config, tls_server_context_factory=tls_server_context_factory, config=config, version_string="Synapse/" + get_version_string(synapse), database_engine=database_engine, ) ps.setup() ps.start_listening(config.worker_listeners) def start(): ps.get_datastore().start_profiling() ps.get_state_handler().start_caching() reactor.callWhenRunning(start) _base.start_worker_reactor("synapse-federation-sender", config) class FederationSenderHandler(object): """Processes the replication stream and forwards the appropriate entries to the federation sender. """ def __init__(self, hs, replication_client): self.store = hs.get_datastore() self.federation_sender = hs.get_federation_sender() self.replication_client = replication_client self.federation_position = self.store.federation_out_pos_startup self._fed_position_linearizer = Linearizer(name="_fed_position_linearizer") self._last_ack = self.federation_position self._room_serials = {} self._room_typing = {} def on_start(self): # There may be some events that are persisted but haven't been sent, # so send them now. self.federation_sender.notify_new_events( self.store.get_room_max_stream_ordering() ) def stream_positions(self): return {"federation": self.federation_position} def process_replication_rows(self, stream_name, token, rows): # The federation stream contains things that we want to send out, e.g. # presence, typing, etc. if stream_name == "federation": send_queue.process_rows_for_federation(self.federation_sender, rows) preserve_fn(self.update_token)(token) # We also need to poke the federation sender when new events happen elif stream_name == "events": self.federation_sender.notify_new_events(token) @defer.inlineCallbacks def update_token(self, token): self.federation_position = token # We linearize here to ensure we don't have races updating the token with (yield self._fed_position_linearizer.queue(None)): if self._last_ack < self.federation_position: yield self.store.update_federation_out_pos( "federation", self.federation_position ) # We ACK this token over replication so that the master can drop # its in memory queues self.replication_client.send_federation_ack(self.federation_position) self._last_ack = self.federation_position if __name__ == '__main__': with LoggingContext("main"): start(sys.argv[1:]) synapse-0.24.0/synapse/app/frontend_proxy.py000066400000000000000000000207231317335640100211620ustar00rootroot00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
import logging import sys import synapse from synapse import events from synapse.api.errors import SynapseError from synapse.app import _base from synapse.config._base import ConfigError from synapse.config.homeserver import HomeServerConfig from synapse.config.logger import setup_logging from synapse.crypto import context_factory from synapse.http.server import JsonResource from synapse.http.servlet import ( RestServlet, parse_json_object_from_request, ) from synapse.http.site import SynapseSite from synapse.metrics.resource import METRICS_PREFIX, MetricsResource from synapse.replication.slave.storage._base import BaseSlavedStore from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore from synapse.replication.slave.storage.client_ips import SlavedClientIpStore from synapse.replication.slave.storage.devices import SlavedDeviceStore from synapse.replication.slave.storage.registration import SlavedRegistrationStore from synapse.replication.tcp.client import ReplicationClientHandler from synapse.rest.client.v2_alpha._base import client_v2_patterns from synapse.server import HomeServer from synapse.storage.engines import create_engine from synapse.util.httpresourcetree import create_resource_tree from synapse.util.logcontext import LoggingContext from synapse.util.manhole import manhole from synapse.util.versionstring import get_version_string from twisted.internet import defer, reactor from twisted.web.resource import Resource logger = logging.getLogger("synapse.app.frontend_proxy") class KeyUploadServlet(RestServlet): PATTERNS = client_v2_patterns("/keys/upload(/(?P[^/]+))?$", releases=()) def __init__(self, hs): """ Args: hs (synapse.server.HomeServer): server """ super(KeyUploadServlet, self).__init__() self.auth = hs.get_auth() self.store = hs.get_datastore() self.http_client = hs.get_simple_http_client() self.main_uri = hs.config.worker_main_http_uri @defer.inlineCallbacks def on_POST(self, request, device_id): requester = yield self.auth.get_user_by_req(request, allow_guest=True) user_id = requester.user.to_string() body = parse_json_object_from_request(request) if device_id is not None: # passing the device_id here is deprecated; however, we allow it # for now for compatibility with older clients. if (requester.device_id is not None and device_id != requester.device_id): logger.warning("Client uploading keys for a different device " "(logged in as %s, uploading for %s)", requester.device_id, device_id) else: device_id = requester.device_id if device_id is None: raise SynapseError( 400, "To upload keys, you must pass device_id when authenticating" ) if body: # They're actually trying to upload something, proxy to main synapse. result = yield self.http_client.post_json_get_json( self.main_uri + request.uri, body, ) defer.returnValue((200, result)) else: # Just interested in counts. result = yield self.store.count_e2e_one_time_keys(user_id, device_id) defer.returnValue((200, {"one_time_key_counts": result})) class FrontendProxySlavedStore( SlavedDeviceStore, SlavedClientIpStore, SlavedApplicationServiceStore, SlavedRegistrationStore, BaseSlavedStore, ): pass class FrontendProxyServer(HomeServer): def get_db_conn(self, run_new_connection=True): # Any param beginning with cp_ is a parameter for adbapi, and should # not be passed to the database engine. 
db_params = { k: v for k, v in self.db_config.get("args", {}).items() if not k.startswith("cp_") } db_conn = self.database_engine.module.connect(**db_params) if run_new_connection: self.database_engine.on_new_connection(db_conn) return db_conn def setup(self): logger.info("Setting up.") self.datastore = FrontendProxySlavedStore(self.get_db_conn(), self) logger.info("Finished setting up.") def _listen_http(self, listener_config): port = listener_config["port"] bind_addresses = listener_config["bind_addresses"] site_tag = listener_config.get("tag", port) resources = {} for res in listener_config["resources"]: for name in res["names"]: if name == "metrics": resources[METRICS_PREFIX] = MetricsResource(self) elif name == "client": resource = JsonResource(self, canonical_json=False) KeyUploadServlet(self).register(resource) resources.update({ "/_matrix/client/r0": resource, "/_matrix/client/unstable": resource, "/_matrix/client/v2_alpha": resource, "/_matrix/client/api/v1": resource, }) root_resource = create_resource_tree(resources, Resource()) for address in bind_addresses: reactor.listenTCP( port, SynapseSite( "synapse.access.http.%s" % (site_tag,), site_tag, listener_config, root_resource, ), interface=address ) logger.info("Synapse client reader now listening on port %d", port) def start_listening(self, listeners): for listener in listeners: if listener["type"] == "http": self._listen_http(listener) elif listener["type"] == "manhole": bind_addresses = listener["bind_addresses"] for address in bind_addresses: reactor.listenTCP( listener["port"], manhole( username="matrix", password="rabbithole", globals={"hs": self}, ), interface=address ) else: logger.warn("Unrecognized listener type: %s", listener["type"]) self.get_tcp_replication().start_replication(self) def build_tcp_replication(self): return ReplicationClientHandler(self.get_datastore()) def start(config_options): try: config = HomeServerConfig.load_config( "Synapse frontend proxy", config_options ) except ConfigError as e: sys.stderr.write("\n" + e.message + "\n") sys.exit(1) assert config.worker_app == "synapse.app.frontend_proxy" assert config.worker_main_http_uri is not None setup_logging(config, use_worker_options=True) events.USE_FROZEN_DICTS = config.use_frozen_dicts database_engine = create_engine(config.database_config) tls_server_context_factory = context_factory.ServerContextFactory(config) ss = FrontendProxyServer( config.server_name, db_config=config.database_config, tls_server_context_factory=tls_server_context_factory, config=config, version_string="Synapse/" + get_version_string(synapse), database_engine=database_engine, ) ss.setup() ss.get_handlers() ss.start_listening(config.worker_listeners) def start(): ss.get_state_handler().start_caching() ss.get_datastore().start_profiling() reactor.callWhenRunning(start) _base.start_worker_reactor("synapse-frontend-proxy", config) if __name__ == '__main__': with LoggingContext("main"): start(sys.argv[1:]) synapse-0.24.0/synapse/app/homeserver.py000077500000000000000000000377741317335640100203020ustar00rootroot00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import gc import logging import os import sys import synapse import synapse.config.logger from synapse import events from synapse.api.urls import CONTENT_REPO_PREFIX, FEDERATION_PREFIX, \ LEGACY_MEDIA_PREFIX, MEDIA_PREFIX, SERVER_KEY_PREFIX, SERVER_KEY_V2_PREFIX, \ STATIC_PREFIX, WEB_CLIENT_PREFIX from synapse.app import _base from synapse.app._base import quit_with_error from synapse.config._base import ConfigError from synapse.config.homeserver import HomeServerConfig from synapse.crypto import context_factory from synapse.federation.transport.server import TransportLayerServer from synapse.http.server import RootRedirect from synapse.http.site import SynapseSite from synapse.metrics import register_memory_metrics from synapse.metrics.resource import METRICS_PREFIX, MetricsResource from synapse.python_dependencies import CONDITIONAL_REQUIREMENTS, \ check_requirements from synapse.replication.tcp.resource import ReplicationStreamProtocolFactory from synapse.rest import ClientRestResource from synapse.rest.key.v1.server_key_resource import LocalKey from synapse.rest.key.v2 import KeyApiV2Resource from synapse.rest.media.v0.content_repository import ContentRepoResource from synapse.rest.media.v1.media_repository import MediaRepositoryResource from synapse.server import HomeServer from synapse.storage import are_all_users_on_domain from synapse.storage.engines import IncorrectDatabaseSetup, create_engine from synapse.storage.prepare_database import UpgradeDatabaseException, prepare_database from synapse.util.httpresourcetree import create_resource_tree from synapse.util.logcontext import LoggingContext from synapse.util.manhole import manhole from synapse.util.rlimit import change_resource_limit from synapse.util.versionstring import get_version_string from twisted.application import service from twisted.internet import defer, reactor from twisted.web.resource import EncodingResourceWrapper, Resource from twisted.web.server import GzipEncoderFactory from twisted.web.static import File logger = logging.getLogger("synapse.app.homeserver") def gz_wrap(r): return EncodingResourceWrapper(r, [GzipEncoderFactory()]) def build_resource_for_web_client(hs): webclient_path = hs.get_config().web_client_location if not webclient_path: try: import syweb except ImportError: quit_with_error( "Could not find a webclient.\n\n" "Please either install the matrix-angular-sdk or configure\n" "the location of the source to serve via the configuration\n" "option `web_client_location`\n\n" "To install the `matrix-angular-sdk` via pip, run:\n\n" " pip install '%(dep)s'\n" "\n" "You can also disable hosting of the webclient via the\n" "configuration option `web_client`\n" % {"dep": CONDITIONAL_REQUIREMENTS["web_client"].keys()[0]} ) syweb_path = os.path.dirname(syweb.__file__) webclient_path = os.path.join(syweb_path, "webclient") # GZip is disabled here due to # https://twistedmatrix.com/trac/ticket/7678 # (It can stay enabled for the API resources: they call # write() with the whole body and then finish() straight # after and so do not trigger the bug. 
# GzipFile was removed in commit 184ba09 # return GzipFile(webclient_path) # TODO configurable? return File(webclient_path) # TODO configurable? class SynapseHomeServer(HomeServer): def _listener_http(self, config, listener_config): port = listener_config["port"] bind_addresses = listener_config["bind_addresses"] tls = listener_config.get("tls", False) site_tag = listener_config.get("tag", port) if tls and config.no_tls: return resources = {} for res in listener_config["resources"]: for name in res["names"]: if name == "client": client_resource = ClientRestResource(self) if res["compress"]: client_resource = gz_wrap(client_resource) resources.update({ "/_matrix/client/api/v1": client_resource, "/_matrix/client/r0": client_resource, "/_matrix/client/unstable": client_resource, "/_matrix/client/v2_alpha": client_resource, "/_matrix/client/versions": client_resource, }) if name == "federation": resources.update({ FEDERATION_PREFIX: TransportLayerServer(self), }) if name in ["static", "client"]: resources.update({ STATIC_PREFIX: File( os.path.join(os.path.dirname(synapse.__file__), "static") ), }) if name in ["media", "federation", "client"]: media_repo = MediaRepositoryResource(self) resources.update({ MEDIA_PREFIX: media_repo, LEGACY_MEDIA_PREFIX: media_repo, CONTENT_REPO_PREFIX: ContentRepoResource( self, self.config.uploads_path ), }) if name in ["keys", "federation"]: resources.update({ SERVER_KEY_PREFIX: LocalKey(self), SERVER_KEY_V2_PREFIX: KeyApiV2Resource(self), }) if name == "webclient": resources[WEB_CLIENT_PREFIX] = build_resource_for_web_client(self) if name == "metrics" and self.get_config().enable_metrics: resources[METRICS_PREFIX] = MetricsResource(self) if WEB_CLIENT_PREFIX in resources: root_resource = RootRedirect(WEB_CLIENT_PREFIX) else: root_resource = Resource() root_resource = create_resource_tree(resources, root_resource) if tls: for address in bind_addresses: reactor.listenSSL( port, SynapseSite( "synapse.access.https.%s" % (site_tag,), site_tag, listener_config, root_resource, ), self.tls_server_context_factory, interface=address ) else: for address in bind_addresses: reactor.listenTCP( port, SynapseSite( "synapse.access.http.%s" % (site_tag,), site_tag, listener_config, root_resource, ), interface=address ) logger.info("Synapse now listening on port %d", port) def start_listening(self): config = self.get_config() for listener in config.listeners: if listener["type"] == "http": self._listener_http(config, listener) elif listener["type"] == "manhole": bind_addresses = listener["bind_addresses"] for address in bind_addresses: reactor.listenTCP( listener["port"], manhole( username="matrix", password="rabbithole", globals={"hs": self}, ), interface=address ) elif listener["type"] == "replication": bind_addresses = listener["bind_addresses"] for address in bind_addresses: factory = ReplicationStreamProtocolFactory(self) server_listener = reactor.listenTCP( listener["port"], factory, interface=address ) reactor.addSystemEventTrigger( "before", "shutdown", server_listener.stopListening, ) else: logger.warn("Unrecognized listener type: %s", listener["type"]) def run_startup_checks(self, db_conn, database_engine): all_users_native = are_all_users_on_domain( db_conn.cursor(), database_engine, self.hostname ) if not all_users_native: quit_with_error( "Found users in database not native to %s!\n" "You cannot changed a synapse server_name after it's been configured" % (self.hostname,) ) try: database_engine.check_database(db_conn.cursor()) except IncorrectDatabaseSetup as e: 
quit_with_error(e.message) def get_db_conn(self, run_new_connection=True): # Any param beginning with cp_ is a parameter for adbapi, and should # not be passed to the database engine. db_params = { k: v for k, v in self.db_config.get("args", {}).items() if not k.startswith("cp_") } db_conn = self.database_engine.module.connect(**db_params) if run_new_connection: self.database_engine.on_new_connection(db_conn) return db_conn def setup(config_options): """ Args: config_options_options: The options passed to Synapse. Usually `sys.argv[1:]`. Returns: HomeServer """ try: config = HomeServerConfig.load_or_generate_config( "Synapse Homeserver", config_options, ) except ConfigError as e: sys.stderr.write("\n" + e.message + "\n") sys.exit(1) if not config: # If a config isn't returned, and an exception isn't raised, we're just # generating config files and shouldn't try to continue. sys.exit(0) synapse.config.logger.setup_logging(config, use_worker_options=False) # check any extra requirements we have now we have a config check_requirements(config) version_string = "Synapse/" + get_version_string(synapse) logger.info("Server hostname: %s", config.server_name) logger.info("Server version: %s", version_string) events.USE_FROZEN_DICTS = config.use_frozen_dicts tls_server_context_factory = context_factory.ServerContextFactory(config) database_engine = create_engine(config.database_config) config.database_config["args"]["cp_openfun"] = database_engine.on_new_connection hs = SynapseHomeServer( config.server_name, db_config=config.database_config, tls_server_context_factory=tls_server_context_factory, config=config, version_string=version_string, database_engine=database_engine, ) logger.info("Preparing database: %s...", config.database_config['name']) try: db_conn = hs.get_db_conn(run_new_connection=False) prepare_database(db_conn, database_engine, config=config) database_engine.on_new_connection(db_conn) hs.run_startup_checks(db_conn, database_engine) db_conn.commit() except UpgradeDatabaseException: sys.stderr.write( "\nFailed to upgrade database.\n" "Have you checked for version specific instructions in" " UPGRADES.rst?\n" ) sys.exit(1) logger.info("Database prepared in %s.", config.database_config['name']) hs.setup() hs.start_listening() def start(): hs.get_pusherpool().start() hs.get_state_handler().start_caching() hs.get_datastore().start_profiling() hs.get_datastore().start_doing_background_updates() hs.get_replication_layer().start_get_pdu_cache() register_memory_metrics(hs) reactor.callWhenRunning(start) return hs class SynapseService(service.Service): """A twisted Service class that will start synapse. Used to run synapse via twistd and a .tac. 
""" def __init__(self, config): self.config = config def startService(self): hs = setup(self.config) change_resource_limit(hs.config.soft_file_limit) if hs.config.gc_thresholds: gc.set_threshold(*hs.config.gc_thresholds) def stopService(self): return self._port.stopListening() def run(hs): PROFILE_SYNAPSE = False if PROFILE_SYNAPSE: def profile(func): from cProfile import Profile from threading import current_thread def profiled(*args, **kargs): profile = Profile() profile.enable() func(*args, **kargs) profile.disable() ident = current_thread().ident profile.dump_stats("/tmp/%s.%s.%i.pstat" % ( hs.hostname, func.__name__, ident )) return profiled from twisted.python.threadpool import ThreadPool ThreadPool._worker = profile(ThreadPool._worker) reactor.run = profile(reactor.run) clock = hs.get_clock() start_time = clock.time() stats = {} @defer.inlineCallbacks def phone_stats_home(): logger.info("Gathering stats for reporting") now = int(hs.get_clock().time()) uptime = int(now - start_time) if uptime < 0: uptime = 0 stats["homeserver"] = hs.config.server_name stats["timestamp"] = now stats["uptime_seconds"] = uptime stats["total_users"] = yield hs.get_datastore().count_all_users() total_nonbridged_users = yield hs.get_datastore().count_nonbridged_users() stats["total_nonbridged_users"] = total_nonbridged_users room_count = yield hs.get_datastore().get_room_count() stats["total_room_count"] = room_count stats["daily_active_users"] = yield hs.get_datastore().count_daily_users() stats["daily_active_rooms"] = yield hs.get_datastore().count_daily_active_rooms() stats["daily_messages"] = yield hs.get_datastore().count_daily_messages() daily_sent_messages = yield hs.get_datastore().count_daily_sent_messages() stats["daily_sent_messages"] = daily_sent_messages logger.info("Reporting stats to matrix.org: %s" % (stats,)) try: yield hs.get_simple_http_client().put_json( "https://matrix.org/report-usage-stats/push", stats ) except Exception as e: logger.warn("Error reporting stats: %s", e) if hs.config.report_stats: logger.info("Scheduling stats reporting for 3 hour intervals") clock.looping_call(phone_stats_home, 3 * 60 * 60 * 1000) # We wait 5 minutes to send the first set of stats as the server can # be quite busy the first few minutes clock.call_later(5 * 60, phone_stats_home) if hs.config.daemonize and hs.config.print_pidfile: print (hs.config.pid_file) _base.start_reactor( "synapse-homeserver", hs.config.soft_file_limit, hs.config.gc_thresholds, hs.config.pid_file, hs.config.daemonize, hs.config.cpu_affinity, logger, ) def main(): with LoggingContext("main"): # check base requirements check_requirements() hs = setup(sys.argv[1:]) run(hs) if __name__ == '__main__': main() synapse-0.24.0/synapse/app/media_repository.py000066400000000000000000000150231317335640100214550ustar00rootroot00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
import logging import sys import synapse from synapse import events from synapse.api.urls import ( CONTENT_REPO_PREFIX, LEGACY_MEDIA_PREFIX, MEDIA_PREFIX ) from synapse.app import _base from synapse.config._base import ConfigError from synapse.config.homeserver import HomeServerConfig from synapse.config.logger import setup_logging from synapse.crypto import context_factory from synapse.http.site import SynapseSite from synapse.metrics.resource import METRICS_PREFIX, MetricsResource from synapse.replication.slave.storage._base import BaseSlavedStore from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore from synapse.replication.slave.storage.client_ips import SlavedClientIpStore from synapse.replication.slave.storage.registration import SlavedRegistrationStore from synapse.replication.slave.storage.transactions import TransactionStore from synapse.replication.tcp.client import ReplicationClientHandler from synapse.rest.media.v0.content_repository import ContentRepoResource from synapse.rest.media.v1.media_repository import MediaRepositoryResource from synapse.server import HomeServer from synapse.storage.engines import create_engine from synapse.storage.media_repository import MediaRepositoryStore from synapse.util.httpresourcetree import create_resource_tree from synapse.util.logcontext import LoggingContext from synapse.util.manhole import manhole from synapse.util.versionstring import get_version_string from twisted.internet import reactor from twisted.web.resource import Resource logger = logging.getLogger("synapse.app.media_repository") class MediaRepositorySlavedStore( SlavedApplicationServiceStore, SlavedRegistrationStore, SlavedClientIpStore, TransactionStore, BaseSlavedStore, MediaRepositoryStore, ): pass class MediaRepositoryServer(HomeServer): def get_db_conn(self, run_new_connection=True): # Any param beginning with cp_ is a parameter for adbapi, and should # not be passed to the database engine. 
db_params = { k: v for k, v in self.db_config.get("args", {}).items() if not k.startswith("cp_") } db_conn = self.database_engine.module.connect(**db_params) if run_new_connection: self.database_engine.on_new_connection(db_conn) return db_conn def setup(self): logger.info("Setting up.") self.datastore = MediaRepositorySlavedStore(self.get_db_conn(), self) logger.info("Finished setting up.") def _listen_http(self, listener_config): port = listener_config["port"] bind_addresses = listener_config["bind_addresses"] site_tag = listener_config.get("tag", port) resources = {} for res in listener_config["resources"]: for name in res["names"]: if name == "metrics": resources[METRICS_PREFIX] = MetricsResource(self) elif name == "media": media_repo = MediaRepositoryResource(self) resources.update({ MEDIA_PREFIX: media_repo, LEGACY_MEDIA_PREFIX: media_repo, CONTENT_REPO_PREFIX: ContentRepoResource( self, self.config.uploads_path ), }) root_resource = create_resource_tree(resources, Resource()) for address in bind_addresses: reactor.listenTCP( port, SynapseSite( "synapse.access.http.%s" % (site_tag,), site_tag, listener_config, root_resource, ), interface=address ) logger.info("Synapse media repository now listening on port %d", port) def start_listening(self, listeners): for listener in listeners: if listener["type"] == "http": self._listen_http(listener) elif listener["type"] == "manhole": bind_addresses = listener["bind_addresses"] for address in bind_addresses: reactor.listenTCP( listener["port"], manhole( username="matrix", password="rabbithole", globals={"hs": self}, ), interface=address ) else: logger.warn("Unrecognized listener type: %s", listener["type"]) self.get_tcp_replication().start_replication(self) def build_tcp_replication(self): return ReplicationClientHandler(self.get_datastore()) def start(config_options): try: config = HomeServerConfig.load_config( "Synapse media repository", config_options ) except ConfigError as e: sys.stderr.write("\n" + e.message + "\n") sys.exit(1) assert config.worker_app == "synapse.app.media_repository" setup_logging(config, use_worker_options=True) events.USE_FROZEN_DICTS = config.use_frozen_dicts database_engine = create_engine(config.database_config) tls_server_context_factory = context_factory.ServerContextFactory(config) ss = MediaRepositoryServer( config.server_name, db_config=config.database_config, tls_server_context_factory=tls_server_context_factory, config=config, version_string="Synapse/" + get_version_string(synapse), database_engine=database_engine, ) ss.setup() ss.get_handlers() ss.start_listening(config.worker_listeners) def start(): ss.get_state_handler().start_caching() ss.get_datastore().start_profiling() reactor.callWhenRunning(start) _base.start_worker_reactor("synapse-media-repository", config) if __name__ == '__main__': with LoggingContext("main"): start(sys.argv[1:]) synapse-0.24.0/synapse/app/pusher.py000066400000000000000000000207201317335640100174050ustar00rootroot00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. import logging import sys import synapse from synapse import events from synapse.app import _base from synapse.config._base import ConfigError from synapse.config.homeserver import HomeServerConfig from synapse.config.logger import setup_logging from synapse.http.site import SynapseSite from synapse.metrics.resource import METRICS_PREFIX, MetricsResource from synapse.replication.slave.storage.account_data import SlavedAccountDataStore from synapse.replication.slave.storage.events import SlavedEventStore from synapse.replication.slave.storage.pushers import SlavedPusherStore from synapse.replication.slave.storage.receipts import SlavedReceiptsStore from synapse.replication.tcp.client import ReplicationClientHandler from synapse.server import HomeServer from synapse.storage import DataStore from synapse.storage.engines import create_engine from synapse.storage.roommember import RoomMemberStore from synapse.util.httpresourcetree import create_resource_tree from synapse.util.logcontext import LoggingContext, preserve_fn from synapse.util.manhole import manhole from synapse.util.versionstring import get_version_string from twisted.internet import defer, reactor from twisted.web.resource import Resource logger = logging.getLogger("synapse.app.pusher") class PusherSlaveStore( SlavedEventStore, SlavedPusherStore, SlavedReceiptsStore, SlavedAccountDataStore ): update_pusher_last_stream_ordering_and_success = ( DataStore.update_pusher_last_stream_ordering_and_success.__func__ ) update_pusher_failing_since = ( DataStore.update_pusher_failing_since.__func__ ) update_pusher_last_stream_ordering = ( DataStore.update_pusher_last_stream_ordering.__func__ ) get_throttle_params_by_room = ( DataStore.get_throttle_params_by_room.__func__ ) set_throttle_params = ( DataStore.set_throttle_params.__func__ ) get_time_of_last_push_action_before = ( DataStore.get_time_of_last_push_action_before.__func__ ) get_profile_displayname = ( DataStore.get_profile_displayname.__func__ ) who_forgot_in_room = ( RoomMemberStore.__dict__["who_forgot_in_room"] ) class PusherServer(HomeServer): def get_db_conn(self, run_new_connection=True): # Any param beginning with cp_ is a parameter for adbapi, and should # not be passed to the database engine. 
db_params = { k: v for k, v in self.db_config.get("args", {}).items() if not k.startswith("cp_") } db_conn = self.database_engine.module.connect(**db_params) if run_new_connection: self.database_engine.on_new_connection(db_conn) return db_conn def setup(self): logger.info("Setting up.") self.datastore = PusherSlaveStore(self.get_db_conn(), self) logger.info("Finished setting up.") def remove_pusher(self, app_id, push_key, user_id): self.get_tcp_replication().send_remove_pusher(app_id, push_key, user_id) def _listen_http(self, listener_config): port = listener_config["port"] bind_addresses = listener_config["bind_addresses"] site_tag = listener_config.get("tag", port) resources = {} for res in listener_config["resources"]: for name in res["names"]: if name == "metrics": resources[METRICS_PREFIX] = MetricsResource(self) root_resource = create_resource_tree(resources, Resource()) for address in bind_addresses: reactor.listenTCP( port, SynapseSite( "synapse.access.http.%s" % (site_tag,), site_tag, listener_config, root_resource, ), interface=address ) logger.info("Synapse pusher now listening on port %d", port) def start_listening(self, listeners): for listener in listeners: if listener["type"] == "http": self._listen_http(listener) elif listener["type"] == "manhole": bind_addresses = listener["bind_addresses"] for address in bind_addresses: reactor.listenTCP( listener["port"], manhole( username="matrix", password="rabbithole", globals={"hs": self}, ), interface=address ) else: logger.warn("Unrecognized listener type: %s", listener["type"]) self.get_tcp_replication().start_replication(self) def build_tcp_replication(self): return PusherReplicationHandler(self) class PusherReplicationHandler(ReplicationClientHandler): def __init__(self, hs): super(PusherReplicationHandler, self).__init__(hs.get_datastore()) self.pusher_pool = hs.get_pusherpool() def on_rdata(self, stream_name, token, rows): super(PusherReplicationHandler, self).on_rdata(stream_name, token, rows) preserve_fn(self.poke_pushers)(stream_name, token, rows) @defer.inlineCallbacks def poke_pushers(self, stream_name, token, rows): if stream_name == "pushers": for row in rows: if row.deleted: yield self.stop_pusher(row.user_id, row.app_id, row.pushkey) else: yield self.start_pusher(row.user_id, row.app_id, row.pushkey) elif stream_name == "events": yield self.pusher_pool.on_new_notifications( token, token, ) elif stream_name == "receipts": yield self.pusher_pool.on_new_receipts( token, token, set(row.room_id for row in rows) ) def stop_pusher(self, user_id, app_id, pushkey): key = "%s:%s" % (app_id, pushkey) pushers_for_user = self.pusher_pool.pushers.get(user_id, {}) pusher = pushers_for_user.pop(key, None) if pusher is None: return logger.info("Stopping pusher %r / %r", user_id, key) pusher.on_stop() def start_pusher(self, user_id, app_id, pushkey): key = "%s:%s" % (app_id, pushkey) logger.info("Starting pusher %r / %r", user_id, key) return self.pusher_pool._refresh_pusher(app_id, pushkey, user_id) def start(config_options): try: config = HomeServerConfig.load_config( "Synapse pusher", config_options ) except ConfigError as e: sys.stderr.write("\n" + e.message + "\n") sys.exit(1) assert config.worker_app == "synapse.app.pusher" setup_logging(config, use_worker_options=True) events.USE_FROZEN_DICTS = config.use_frozen_dicts if config.start_pushers: sys.stderr.write( "\nThe pushers must be disabled in the main synapse process" "\nbefore they can be run in a separate worker." 
"\nPlease add ``start_pushers: false`` to the main config" "\n" ) sys.exit(1) # Force the pushers to start since they will be disabled in the main config config.start_pushers = True database_engine = create_engine(config.database_config) ps = PusherServer( config.server_name, db_config=config.database_config, config=config, version_string="Synapse/" + get_version_string(synapse), database_engine=database_engine, ) ps.setup() ps.start_listening(config.worker_listeners) def start(): ps.get_pusherpool().start() ps.get_datastore().start_profiling() ps.get_state_handler().start_caching() reactor.callWhenRunning(start) _base.start_worker_reactor("synapse-pusher", config) if __name__ == '__main__': with LoggingContext("main"): ps = start(sys.argv[1:]) synapse-0.24.0/synapse/app/synchrotron.py000066400000000000000000000416111317335640100204710ustar00rootroot00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import contextlib import logging import sys import synapse from synapse.api.constants import EventTypes from synapse.app import _base from synapse.config._base import ConfigError from synapse.config.homeserver import HomeServerConfig from synapse.config.logger import setup_logging from synapse.handlers.presence import PresenceHandler, get_interested_parties from synapse.http.server import JsonResource from synapse.http.site import SynapseSite from synapse.metrics.resource import METRICS_PREFIX, MetricsResource from synapse.replication.slave.storage._base import BaseSlavedStore from synapse.replication.slave.storage.account_data import SlavedAccountDataStore from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore from synapse.replication.slave.storage.client_ips import SlavedClientIpStore from synapse.replication.slave.storage.deviceinbox import SlavedDeviceInboxStore from synapse.replication.slave.storage.devices import SlavedDeviceStore from synapse.replication.slave.storage.events import SlavedEventStore from synapse.replication.slave.storage.filtering import SlavedFilteringStore from synapse.replication.slave.storage.presence import SlavedPresenceStore from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore from synapse.replication.slave.storage.receipts import SlavedReceiptsStore from synapse.replication.slave.storage.registration import SlavedRegistrationStore from synapse.replication.slave.storage.room import RoomStore from synapse.replication.slave.storage.groups import SlavedGroupServerStore from synapse.replication.tcp.client import ReplicationClientHandler from synapse.rest.client.v1 import events from synapse.rest.client.v1.initial_sync import InitialSyncRestServlet from synapse.rest.client.v1.room import RoomInitialSyncRestServlet from synapse.rest.client.v2_alpha import sync from synapse.server import HomeServer from synapse.storage.engines import create_engine from synapse.storage.presence import UserPresenceState from synapse.storage.roommember import 
RoomMemberStore from synapse.util.httpresourcetree import create_resource_tree from synapse.util.logcontext import LoggingContext, preserve_fn from synapse.util.manhole import manhole from synapse.util.stringutils import random_string from synapse.util.versionstring import get_version_string from twisted.internet import defer, reactor from twisted.web.resource import Resource logger = logging.getLogger("synapse.app.synchrotron") class SynchrotronSlavedStore( SlavedPushRuleStore, SlavedEventStore, SlavedReceiptsStore, SlavedAccountDataStore, SlavedApplicationServiceStore, SlavedRegistrationStore, SlavedFilteringStore, SlavedPresenceStore, SlavedGroupServerStore, SlavedDeviceInboxStore, SlavedDeviceStore, SlavedClientIpStore, RoomStore, BaseSlavedStore, ): who_forgot_in_room = ( RoomMemberStore.__dict__["who_forgot_in_room"] ) did_forget = ( RoomMemberStore.__dict__["did_forget"] ) UPDATE_SYNCING_USERS_MS = 10 * 1000 class SynchrotronPresence(object): def __init__(self, hs): self.hs = hs self.is_mine_id = hs.is_mine_id self.http_client = hs.get_simple_http_client() self.store = hs.get_datastore() self.user_to_num_current_syncs = {} self.clock = hs.get_clock() self.notifier = hs.get_notifier() active_presence = self.store.take_presence_startup_info() self.user_to_current_state = { state.user_id: state for state in active_presence } # user_id -> last_sync_ms. Lists the users that have stopped syncing # but we haven't notified the master of that yet self.users_going_offline = {} self._send_stop_syncing_loop = self.clock.looping_call( self.send_stop_syncing, 10 * 1000 ) self.process_id = random_string(16) logger.info("Presence process_id is %r", self.process_id) def send_user_sync(self, user_id, is_syncing, last_sync_ms): self.hs.get_tcp_replication().send_user_sync(user_id, is_syncing, last_sync_ms) def mark_as_coming_online(self, user_id): """A user has started syncing. Send a UserSync to the master, unless they had recently stopped syncing. Args: user_id (str) """ going_offline = self.users_going_offline.pop(user_id, None) if not going_offline: # Safe to skip because we haven't yet told the master they were offline self.send_user_sync(user_id, True, self.clock.time_msec()) def mark_as_going_offline(self, user_id): """A user has stopped syncing. We wait before notifying the master as its likely they'll come back soon. This allows us to avoid sending a stopped syncing immediately followed by a started syncing notification to the master Args: user_id (str) """ self.users_going_offline[user_id] = self.clock.time_msec() def send_stop_syncing(self): """Check if there are any users who have stopped syncing a while ago and haven't come back yet. If there are poke the master about them. """ now = self.clock.time_msec() for user_id, last_sync_ms in self.users_going_offline.items(): if now - last_sync_ms > 10 * 1000: self.users_going_offline.pop(user_id, None) self.send_user_sync(user_id, False, last_sync_ms) def set_state(self, user, state, ignore_status_msg=False): # TODO Hows this supposed to work? 
pass get_states = PresenceHandler.get_states.__func__ get_state = PresenceHandler.get_state.__func__ current_state_for_users = PresenceHandler.current_state_for_users.__func__ def user_syncing(self, user_id, affect_presence): if affect_presence: curr_sync = self.user_to_num_current_syncs.get(user_id, 0) self.user_to_num_current_syncs[user_id] = curr_sync + 1 # If we went from no in flight sync to some, notify replication if self.user_to_num_current_syncs[user_id] == 1: self.mark_as_coming_online(user_id) def _end(): # We check that the user_id is in user_to_num_current_syncs because # user_to_num_current_syncs may have been cleared if we are # shutting down. if affect_presence and user_id in self.user_to_num_current_syncs: self.user_to_num_current_syncs[user_id] -= 1 # If we went from one in flight sync to non, notify replication if self.user_to_num_current_syncs[user_id] == 0: self.mark_as_going_offline(user_id) @contextlib.contextmanager def _user_syncing(): try: yield finally: _end() return defer.succeed(_user_syncing()) @defer.inlineCallbacks def notify_from_replication(self, states, stream_id): parties = yield get_interested_parties(self.store, states) room_ids_to_states, users_to_states = parties self.notifier.on_new_event( "presence_key", stream_id, rooms=room_ids_to_states.keys(), users=users_to_states.keys() ) @defer.inlineCallbacks def process_replication_rows(self, token, rows): states = [UserPresenceState( row.user_id, row.state, row.last_active_ts, row.last_federation_update_ts, row.last_user_sync_ts, row.status_msg, row.currently_active ) for row in rows] for state in states: self.user_to_current_state[row.user_id] = state stream_id = token yield self.notify_from_replication(states, stream_id) def get_currently_syncing_users(self): return [ user_id for user_id, count in self.user_to_num_current_syncs.iteritems() if count > 0 ] class SynchrotronTyping(object): def __init__(self, hs): self._latest_room_serial = 0 self._room_serials = {} self._room_typing = {} def stream_positions(self): # We must update this typing token from the response of the previous # sync. In particular, the stream id may "reset" back to zero/a low # value which we *must* use for the next replication request. return {"typing": self._latest_room_serial} def process_replication_rows(self, token, rows): self._latest_room_serial = token for row in rows: self._room_serials[row.room_id] = token self._room_typing[row.room_id] = row.user_ids class SynchrotronApplicationService(object): def notify_interested_services(self, event): pass class SynchrotronServer(HomeServer): def get_db_conn(self, run_new_connection=True): # Any param beginning with cp_ is a parameter for adbapi, and should # not be passed to the database engine. 
db_params = { k: v for k, v in self.db_config.get("args", {}).items() if not k.startswith("cp_") } db_conn = self.database_engine.module.connect(**db_params) if run_new_connection: self.database_engine.on_new_connection(db_conn) return db_conn def setup(self): logger.info("Setting up.") self.datastore = SynchrotronSlavedStore(self.get_db_conn(), self) logger.info("Finished setting up.") def _listen_http(self, listener_config): port = listener_config["port"] bind_addresses = listener_config["bind_addresses"] site_tag = listener_config.get("tag", port) resources = {} for res in listener_config["resources"]: for name in res["names"]: if name == "metrics": resources[METRICS_PREFIX] = MetricsResource(self) elif name == "client": resource = JsonResource(self, canonical_json=False) sync.register_servlets(self, resource) events.register_servlets(self, resource) InitialSyncRestServlet(self).register(resource) RoomInitialSyncRestServlet(self).register(resource) resources.update({ "/_matrix/client/r0": resource, "/_matrix/client/unstable": resource, "/_matrix/client/v2_alpha": resource, "/_matrix/client/api/v1": resource, }) root_resource = create_resource_tree(resources, Resource()) for address in bind_addresses: reactor.listenTCP( port, SynapseSite( "synapse.access.http.%s" % (site_tag,), site_tag, listener_config, root_resource, ), interface=address ) logger.info("Synapse synchrotron now listening on port %d", port) def start_listening(self, listeners): for listener in listeners: if listener["type"] == "http": self._listen_http(listener) elif listener["type"] == "manhole": bind_addresses = listener["bind_addresses"] for address in bind_addresses: reactor.listenTCP( listener["port"], manhole( username="matrix", password="rabbithole", globals={"hs": self}, ), interface=address ) else: logger.warn("Unrecognized listener type: %s", listener["type"]) self.get_tcp_replication().start_replication(self) def build_tcp_replication(self): return SyncReplicationHandler(self) def build_presence_handler(self): return SynchrotronPresence(self) def build_typing_handler(self): return SynchrotronTyping(self) class SyncReplicationHandler(ReplicationClientHandler): def __init__(self, hs): super(SyncReplicationHandler, self).__init__(hs.get_datastore()) self.store = hs.get_datastore() self.typing_handler = hs.get_typing_handler() self.presence_handler = hs.get_presence_handler() self.notifier = hs.get_notifier() self.presence_handler.sync_callback = self.send_user_sync def on_rdata(self, stream_name, token, rows): super(SyncReplicationHandler, self).on_rdata(stream_name, token, rows) preserve_fn(self.process_and_notify)(stream_name, token, rows) def get_streams_to_replicate(self): args = super(SyncReplicationHandler, self).get_streams_to_replicate() args.update(self.typing_handler.stream_positions()) return args def get_currently_syncing_users(self): return self.presence_handler.get_currently_syncing_users() @defer.inlineCallbacks def process_and_notify(self, stream_name, token, rows): if stream_name == "events": # We shouldn't get multiple rows per token for events stream, so # we don't need to optimise this for multiple rows. 
for row in rows: event = yield self.store.get_event(row.event_id) extra_users = () if event.type == EventTypes.Member: extra_users = (event.state_key,) max_token = self.store.get_room_max_stream_ordering() self.notifier.on_new_room_event( event, token, max_token, extra_users ) elif stream_name == "push_rules": self.notifier.on_new_event( "push_rules_key", token, users=[row.user_id for row in rows], ) elif stream_name in ("account_data", "tag_account_data",): self.notifier.on_new_event( "account_data_key", token, users=[row.user_id for row in rows], ) elif stream_name == "receipts": self.notifier.on_new_event( "receipt_key", token, rooms=[row.room_id for row in rows], ) elif stream_name == "typing": self.typing_handler.process_replication_rows(token, rows) self.notifier.on_new_event( "typing_key", token, rooms=[row.room_id for row in rows], ) elif stream_name == "to_device": entities = [row.entity for row in rows if row.entity.startswith("@")] if entities: self.notifier.on_new_event( "to_device_key", token, users=entities, ) elif stream_name == "device_lists": all_room_ids = set() for row in rows: room_ids = yield self.store.get_rooms_for_user(row.user_id) all_room_ids.update(room_ids) self.notifier.on_new_event( "device_list_key", token, rooms=all_room_ids, ) elif stream_name == "presence": yield self.presence_handler.process_replication_rows(token, rows) elif stream_name == "receipts": self.notifier.on_new_event( "groups_key", token, users=[row.user_id for row in rows], ) def start(config_options): try: config = HomeServerConfig.load_config( "Synapse synchrotron", config_options ) except ConfigError as e: sys.stderr.write("\n" + e.message + "\n") sys.exit(1) assert config.worker_app == "synapse.app.synchrotron" setup_logging(config, use_worker_options=True) synapse.events.USE_FROZEN_DICTS = config.use_frozen_dicts database_engine = create_engine(config.database_config) ss = SynchrotronServer( config.server_name, db_config=config.database_config, config=config, version_string="Synapse/" + get_version_string(synapse), database_engine=database_engine, application_service_handler=SynchrotronApplicationService(), ) ss.setup() ss.start_listening(config.worker_listeners) def start(): ss.get_datastore().start_profiling() ss.get_state_handler().start_caching() reactor.callWhenRunning(start) _base.start_worker_reactor("synapse-synchrotron", config) if __name__ == '__main__': with LoggingContext("main"): start(sys.argv[1:]) synapse-0.24.0/synapse/app/synctl.py000077500000000000000000000170241317335640100174210ustar00rootroot00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
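# Illustrative usage, inferred from the argument parser defined in main()
# below; the worker config paths are hypothetical examples:
#
#     synctl start                              # uses homeserver.yaml by default
#     synctl restart homeserver.yaml
#     synctl stop -w workers/pusher.yaml        # a single worker only
#     synctl start -a workers/                  # all workers plus the main process
#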
import argparse import collections import glob import os import os.path import signal import subprocess import sys import yaml import errno import time SYNAPSE = [sys.executable, "-B", "-m", "synapse.app.homeserver"] GREEN = "\x1b[1;32m" YELLOW = "\x1b[1;33m" RED = "\x1b[1;31m" NORMAL = "\x1b[m" def pid_running(pid): try: os.kill(pid, 0) return True except OSError, err: if err.errno == errno.EPERM: return True return False def write(message, colour=NORMAL, stream=sys.stdout): if colour == NORMAL: stream.write(message + "\n") else: stream.write(colour + message + NORMAL + "\n") def abort(message, colour=RED, stream=sys.stderr): write(message, colour, stream) sys.exit(1) def start(configfile): write("Starting ...") args = SYNAPSE args.extend(["--daemonize", "-c", configfile]) try: subprocess.check_call(args) write("started synapse.app.homeserver(%r)" % (configfile,), colour=GREEN) except subprocess.CalledProcessError as e: write( "error starting (exit code: %d); see above for logs" % e.returncode, colour=RED, ) def start_worker(app, configfile, worker_configfile): args = [ "python", "-B", "-m", app, "-c", configfile, "-c", worker_configfile ] try: subprocess.check_call(args) write("started %s(%r)" % (app, worker_configfile), colour=GREEN) except subprocess.CalledProcessError as e: write( "error starting %s(%r) (exit code: %d); see above for logs" % ( app, worker_configfile, e.returncode, ), colour=RED, ) def stop(pidfile, app): if os.path.exists(pidfile): pid = int(open(pidfile).read()) try: os.kill(pid, signal.SIGTERM) write("stopped %s" % (app,), colour=GREEN) except OSError, err: if err.errno == errno.ESRCH: write("%s not running" % (app,), colour=YELLOW) elif err.errno == errno.EPERM: abort("Cannot stop %s: Operation not permitted" % (app,)) else: abort("Cannot stop %s: Unknown error" % (app,)) Worker = collections.namedtuple("Worker", [ "app", "configfile", "pidfile", "cache_factor" ]) def main(): parser = argparse.ArgumentParser() parser.add_argument( "action", choices=["start", "stop", "restart"], help="whether to start, stop or restart the synapse", ) parser.add_argument( "configfile", nargs="?", default="homeserver.yaml", help="the homeserver config file, defaults to homeserver.yaml", ) parser.add_argument( "-w", "--worker", metavar="WORKERCONFIG", help="start or stop a single worker", ) parser.add_argument( "-a", "--all-processes", metavar="WORKERCONFIGDIR", help="start or stop all the workers in the given directory" " and the main synapse process", ) options = parser.parse_args() if options.worker and options.all_processes: write( 'Cannot use "--worker" with "--all-processes"', stream=sys.stderr ) sys.exit(1) configfile = options.configfile if not os.path.exists(configfile): write( "No config file found\n" "To generate a config file, run '%s -c %s --generate-config" " --server-name='\n" % ( " ".join(SYNAPSE), options.configfile ), stream=sys.stderr, ) sys.exit(1) with open(configfile) as stream: config = yaml.load(stream) pidfile = config["pid_file"] cache_factor = config.get("synctl_cache_factor") start_stop_synapse = True if cache_factor: os.environ["SYNAPSE_CACHE_FACTOR"] = str(cache_factor) worker_configfiles = [] if options.worker: start_stop_synapse = False worker_configfile = options.worker if not os.path.exists(worker_configfile): write( "No worker config found at %r" % (worker_configfile,), stream=sys.stderr, ) sys.exit(1) worker_configfiles.append(worker_configfile) if options.all_processes: worker_configdir = options.all_processes if not os.path.isdir(worker_configdir): 
write( "No worker config directory found at %r" % (worker_configdir,), stream=sys.stderr, ) sys.exit(1) worker_configfiles.extend(sorted(glob.glob( os.path.join(worker_configdir, "*.yaml") ))) workers = [] for worker_configfile in worker_configfiles: with open(worker_configfile) as stream: worker_config = yaml.load(stream) worker_app = worker_config["worker_app"] worker_pidfile = worker_config["worker_pid_file"] worker_daemonize = worker_config["worker_daemonize"] assert worker_daemonize, "In config %r: expected '%s' to be True" % ( worker_configfile, "worker_daemonize") worker_cache_factor = worker_config.get("synctl_cache_factor") workers.append(Worker( worker_app, worker_configfile, worker_pidfile, worker_cache_factor, )) action = options.action if action == "stop" or action == "restart": for worker in workers: stop(worker.pidfile, worker.app) if start_stop_synapse: stop(pidfile, "synapse.app.homeserver") # Wait for synapse to actually shutdown before starting it again if action == "restart": running_pids = [] if start_stop_synapse and os.path.exists(pidfile): running_pids.append(int(open(pidfile).read())) for worker in workers: if os.path.exists(worker.pidfile): running_pids.append(int(open(worker.pidfile).read())) if len(running_pids) > 0: write("Waiting for process to exit before restarting...") for running_pid in running_pids: while pid_running(running_pid): time.sleep(0.2) if action == "start" or action == "restart": if start_stop_synapse: # Check if synapse is already running if os.path.exists(pidfile) and pid_running(int(open(pidfile).read())): abort("synapse.app.homeserver already running") start(configfile) for worker in workers: if worker.cache_factor: os.environ["SYNAPSE_CACHE_FACTOR"] = str(worker.cache_factor) start_worker(worker.app, configfile, worker.configfile) if cache_factor: os.environ["SYNAPSE_CACHE_FACTOR"] = str(cache_factor) else: os.environ.pop("SYNAPSE_CACHE_FACTOR", None) if __name__ == "__main__": main() synapse-0.24.0/synapse/app/user_dir.py000066400000000000000000000213721317335640100177170ustar00rootroot00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- # Copyright 2017 Vector Creations Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
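# Worker that serves the user directory search API on its client listeners and
# keeps the directory up to date by following the "current_state_deltas"
# replication stream (descriptive summary, inferred from the classes below).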
import logging import sys import synapse from synapse import events from synapse.app import _base from synapse.config._base import ConfigError from synapse.config.homeserver import HomeServerConfig from synapse.config.logger import setup_logging from synapse.crypto import context_factory from synapse.http.server import JsonResource from synapse.http.site import SynapseSite from synapse.metrics.resource import METRICS_PREFIX, MetricsResource from synapse.replication.slave.storage._base import BaseSlavedStore from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore from synapse.replication.slave.storage.client_ips import SlavedClientIpStore from synapse.replication.slave.storage.events import SlavedEventStore from synapse.replication.slave.storage.registration import SlavedRegistrationStore from synapse.replication.tcp.client import ReplicationClientHandler from synapse.rest.client.v2_alpha import user_directory from synapse.server import HomeServer from synapse.storage.engines import create_engine from synapse.storage.user_directory import UserDirectoryStore from synapse.util.caches.stream_change_cache import StreamChangeCache from synapse.util.httpresourcetree import create_resource_tree from synapse.util.logcontext import LoggingContext, preserve_fn from synapse.util.manhole import manhole from synapse.util.versionstring import get_version_string from twisted.internet import reactor from twisted.web.resource import Resource logger = logging.getLogger("synapse.app.user_dir") class UserDirectorySlaveStore( SlavedEventStore, SlavedApplicationServiceStore, SlavedRegistrationStore, SlavedClientIpStore, UserDirectoryStore, BaseSlavedStore, ): def __init__(self, db_conn, hs): super(UserDirectorySlaveStore, self).__init__(db_conn, hs) events_max = self._stream_id_gen.get_current_token() curr_state_delta_prefill, min_curr_state_delta_id = self._get_cache_dict( db_conn, "current_state_delta_stream", entity_column="room_id", stream_column="stream_id", max_value=events_max, # As we share the stream id with events token limit=1000, ) self._curr_state_delta_stream_cache = StreamChangeCache( "_curr_state_delta_stream_cache", min_curr_state_delta_id, prefilled_cache=curr_state_delta_prefill, ) self._current_state_delta_pos = events_max def stream_positions(self): result = super(UserDirectorySlaveStore, self).stream_positions() result["current_state_deltas"] = self._current_state_delta_pos return result def process_replication_rows(self, stream_name, token, rows): if stream_name == "current_state_deltas": self._current_state_delta_pos = token for row in rows: self._curr_state_delta_stream_cache.entity_has_changed( row.room_id, token ) return super(UserDirectorySlaveStore, self).process_replication_rows( stream_name, token, rows ) class UserDirectoryServer(HomeServer): def get_db_conn(self, run_new_connection=True): # Any param beginning with cp_ is a parameter for adbapi, and should # not be passed to the database engine. 
db_params = { k: v for k, v in self.db_config.get("args", {}).items() if not k.startswith("cp_") } db_conn = self.database_engine.module.connect(**db_params) if run_new_connection: self.database_engine.on_new_connection(db_conn) return db_conn def setup(self): logger.info("Setting up.") self.datastore = UserDirectorySlaveStore(self.get_db_conn(), self) logger.info("Finished setting up.") def _listen_http(self, listener_config): port = listener_config["port"] bind_addresses = listener_config["bind_addresses"] site_tag = listener_config.get("tag", port) resources = {} for res in listener_config["resources"]: for name in res["names"]: if name == "metrics": resources[METRICS_PREFIX] = MetricsResource(self) elif name == "client": resource = JsonResource(self, canonical_json=False) user_directory.register_servlets(self, resource) resources.update({ "/_matrix/client/r0": resource, "/_matrix/client/unstable": resource, "/_matrix/client/v2_alpha": resource, "/_matrix/client/api/v1": resource, }) root_resource = create_resource_tree(resources, Resource()) for address in bind_addresses: reactor.listenTCP( port, SynapseSite( "synapse.access.http.%s" % (site_tag,), site_tag, listener_config, root_resource, ), interface=address ) logger.info("Synapse user_dir now listening on port %d", port) def start_listening(self, listeners): for listener in listeners: if listener["type"] == "http": self._listen_http(listener) elif listener["type"] == "manhole": bind_addresses = listener["bind_addresses"] for address in bind_addresses: reactor.listenTCP( listener["port"], manhole( username="matrix", password="rabbithole", globals={"hs": self}, ), interface=address ) else: logger.warn("Unrecognized listener type: %s", listener["type"]) self.get_tcp_replication().start_replication(self) def build_tcp_replication(self): return UserDirectoryReplicationHandler(self) class UserDirectoryReplicationHandler(ReplicationClientHandler): def __init__(self, hs): super(UserDirectoryReplicationHandler, self).__init__(hs.get_datastore()) self.user_directory = hs.get_user_directory_handler() def on_rdata(self, stream_name, token, rows): super(UserDirectoryReplicationHandler, self).on_rdata( stream_name, token, rows ) if stream_name == "current_state_deltas": preserve_fn(self.user_directory.notify_new_event)() def start(config_options): try: config = HomeServerConfig.load_config( "Synapse user directory", config_options ) except ConfigError as e: sys.stderr.write("\n" + e.message + "\n") sys.exit(1) assert config.worker_app == "synapse.app.user_dir" setup_logging(config, use_worker_options=True) events.USE_FROZEN_DICTS = config.use_frozen_dicts database_engine = create_engine(config.database_config) if config.update_user_directory: sys.stderr.write( "\nThe update_user_directory must be disabled in the main synapse process" "\nbefore they can be run in a separate worker." 
"\nPlease add ``update_user_directory: false`` to the main config" "\n" ) sys.exit(1) # Force the pushers to start since they will be disabled in the main config config.update_user_directory = True tls_server_context_factory = context_factory.ServerContextFactory(config) ps = UserDirectoryServer( config.server_name, db_config=config.database_config, tls_server_context_factory=tls_server_context_factory, config=config, version_string="Synapse/" + get_version_string(synapse), database_engine=database_engine, ) ps.setup() ps.start_listening(config.worker_listeners) def start(): ps.get_datastore().start_profiling() ps.get_state_handler().start_caching() reactor.callWhenRunning(start) _base.start_worker_reactor("synapse-user-dir", config) if __name__ == '__main__': with LoggingContext("main"): start(sys.argv[1:]) synapse-0.24.0/synapse/appservice/000077500000000000000000000000001317335640100171055ustar00rootroot00000000000000synapse-0.24.0/synapse/appservice/__init__.py000066400000000000000000000207221317335640100212210ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from synapse.api.constants import EventTypes from synapse.util.caches.descriptors import cachedInlineCallbacks from twisted.internet import defer import logging import re logger = logging.getLogger(__name__) class ApplicationServiceState(object): DOWN = "down" UP = "up" class AppServiceTransaction(object): """Represents an application service transaction.""" def __init__(self, service, id, events): self.service = service self.id = id self.events = events def send(self, as_api): """Sends this transaction using the provided AS API interface. Args: as_api(ApplicationServiceApi): The API to use to send. Returns: A Deferred which resolves to True if the transaction was sent. """ return as_api.push_bulk( service=self.service, events=self.events, txn_id=self.id ) def complete(self, store): """Completes this transaction as successful. Marks this transaction ID on the application service and removes the transaction contents from the database. Args: store: The database store to operate on. Returns: A Deferred which resolves to True if the transaction was completed. """ return store.complete_appservice_txn( service=self.service, txn_id=self.id ) class ApplicationService(object): """Defines an application service. This definition is mostly what is provided to the /register AS API. Provides methods to check if this service is "interested" in events. """ NS_USERS = "users" NS_ALIASES = "aliases" NS_ROOMS = "rooms" # The ordering here is important as it is used to map database values (which # are stored as ints representing the position in this list) to namespace # values. 
NS_LIST = [NS_USERS, NS_ALIASES, NS_ROOMS] def __init__(self, token, url=None, namespaces=None, hs_token=None, sender=None, id=None, protocols=None, rate_limited=True): self.token = token self.url = url self.hs_token = hs_token self.sender = sender self.namespaces = self._check_namespaces(namespaces) self.id = id if "|" in self.id: raise Exception("application service ID cannot contain '|' character") # .protocols is a publicly visible field if protocols: self.protocols = set(protocols) else: self.protocols = set() self.rate_limited = rate_limited def _check_namespaces(self, namespaces): # Sanity check that it is of the form: # { # users: [ {regex: "[A-z]+.*", exclusive: true}, ...], # aliases: [ {regex: "[A-z]+.*", exclusive: true}, ...], # rooms: [ {regex: "[A-z]+.*", exclusive: true}, ...], # } if not namespaces: namespaces = {} for ns in ApplicationService.NS_LIST: if ns not in namespaces: namespaces[ns] = [] continue if type(namespaces[ns]) != list: raise ValueError("Bad namespace value for '%s'" % ns) for regex_obj in namespaces[ns]: if not isinstance(regex_obj, dict): raise ValueError("Expected dict regex for ns '%s'" % ns) if not isinstance(regex_obj.get("exclusive"), bool): raise ValueError( "Expected bool for 'exclusive' in ns '%s'" % ns ) regex = regex_obj.get("regex") if isinstance(regex, basestring): regex_obj["regex"] = re.compile(regex) # Pre-compile regex else: raise ValueError( "Expected string for 'regex' in ns '%s'" % ns ) return namespaces def _matches_regex(self, test_string, namespace_key): for regex_obj in self.namespaces[namespace_key]: if regex_obj["regex"].match(test_string): return regex_obj return None def _is_exclusive(self, ns_key, test_string): regex_obj = self._matches_regex(test_string, ns_key) if regex_obj: return regex_obj["exclusive"] return False @defer.inlineCallbacks def _matches_user(self, event, store): if not event: defer.returnValue(False) if self.is_interested_in_user(event.sender): defer.returnValue(True) # also check m.room.member state key if (event.type == EventTypes.Member and self.is_interested_in_user(event.state_key)): defer.returnValue(True) if not store: defer.returnValue(False) does_match = yield self._matches_user_in_member_list(event.room_id, store) defer.returnValue(does_match) @cachedInlineCallbacks(num_args=1, cache_context=True) def _matches_user_in_member_list(self, room_id, store, cache_context): member_list = yield store.get_users_in_room( room_id, on_invalidate=cache_context.invalidate ) # check joined member events for user_id in member_list: if self.is_interested_in_user(user_id): defer.returnValue(True) defer.returnValue(False) def _matches_room_id(self, event): if hasattr(event, "room_id"): return self.is_interested_in_room(event.room_id) return False @defer.inlineCallbacks def _matches_aliases(self, event, store): if not store or not event: defer.returnValue(False) alias_list = yield store.get_aliases_for_room(event.room_id) for alias in alias_list: if self.is_interested_in_alias(alias): defer.returnValue(True) defer.returnValue(False) @defer.inlineCallbacks def is_interested(self, event, store=None): """Check if this service is interested in this event. Args: event(Event): The event to check. store(DataStore) Returns: bool: True if this service would like to know about this event. 
""" # Do cheap checks first if self._matches_room_id(event): defer.returnValue(True) if (yield self._matches_aliases(event, store)): defer.returnValue(True) if (yield self._matches_user(event, store)): defer.returnValue(True) defer.returnValue(False) def is_interested_in_user(self, user_id): return ( self._matches_regex(user_id, ApplicationService.NS_USERS) or user_id == self.sender ) def is_interested_in_alias(self, alias): return bool(self._matches_regex(alias, ApplicationService.NS_ALIASES)) def is_interested_in_room(self, room_id): return bool(self._matches_regex(room_id, ApplicationService.NS_ROOMS)) def is_exclusive_user(self, user_id): return ( self._is_exclusive(ApplicationService.NS_USERS, user_id) or user_id == self.sender ) def is_interested_in_protocol(self, protocol): return protocol in self.protocols def is_exclusive_alias(self, alias): return self._is_exclusive(ApplicationService.NS_ALIASES, alias) def is_exclusive_room(self, room_id): return self._is_exclusive(ApplicationService.NS_ROOMS, room_id) def get_exlusive_user_regexes(self): """Get the list of regexes used to determine if a user is exclusively registered by the AS """ return [ regex_obj["regex"] for regex_obj in self.namespaces[ApplicationService.NS_USERS] if regex_obj["exclusive"] ] def is_rate_limited(self): return self.rate_limited def __str__(self): return "ApplicationService: %s" % (self.__dict__,) synapse-0.24.0/synapse/appservice/api.py000066400000000000000000000171271317335640100202400ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from twisted.internet import defer from synapse.api.constants import ThirdPartyEntityKind from synapse.api.errors import CodeMessageException from synapse.http.client import SimpleHttpClient from synapse.events.utils import serialize_event from synapse.util.caches.response_cache import ResponseCache from synapse.types import ThirdPartyInstanceID import logging import urllib logger = logging.getLogger(__name__) HOUR_IN_MS = 60 * 60 * 1000 APP_SERVICE_PREFIX = "/_matrix/app/unstable" def _is_valid_3pe_metadata(info): if "instances" not in info: return False if not isinstance(info["instances"], list): return False return True def _is_valid_3pe_result(r, field): if not isinstance(r, dict): return False for k in (field, "protocol"): if k not in r: return False if not isinstance(r[k], str): return False if "fields" not in r: return False fields = r["fields"] if not isinstance(fields, dict): return False for k in fields.keys(): if not isinstance(fields[k], str): return False return True class ApplicationServiceApi(SimpleHttpClient): """This class manages HS -> AS communications, including querying and pushing. 
""" def __init__(self, hs): super(ApplicationServiceApi, self).__init__(hs) self.clock = hs.get_clock() self.protocol_meta_cache = ResponseCache(hs, timeout_ms=HOUR_IN_MS) @defer.inlineCallbacks def query_user(self, service, user_id): if service.url is None: defer.returnValue(False) uri = service.url + ("/users/%s" % urllib.quote(user_id)) response = None try: response = yield self.get_json(uri, { "access_token": service.hs_token }) if response is not None: # just an empty json object defer.returnValue(True) except CodeMessageException as e: if e.code == 404: defer.returnValue(False) return logger.warning("query_user to %s received %s", uri, e.code) except Exception as ex: logger.warning("query_user to %s threw exception %s", uri, ex) defer.returnValue(False) @defer.inlineCallbacks def query_alias(self, service, alias): if service.url is None: defer.returnValue(False) uri = service.url + ("/rooms/%s" % urllib.quote(alias)) response = None try: response = yield self.get_json(uri, { "access_token": service.hs_token }) if response is not None: # just an empty json object defer.returnValue(True) except CodeMessageException as e: logger.warning("query_alias to %s received %s", uri, e.code) if e.code == 404: defer.returnValue(False) return except Exception as ex: logger.warning("query_alias to %s threw exception %s", uri, ex) defer.returnValue(False) @defer.inlineCallbacks def query_3pe(self, service, kind, protocol, fields): if kind == ThirdPartyEntityKind.USER: required_field = "userid" elif kind == ThirdPartyEntityKind.LOCATION: required_field = "alias" else: raise ValueError( "Unrecognised 'kind' argument %r to query_3pe()", kind ) if service.url is None: defer.returnValue([]) uri = "%s%s/thirdparty/%s/%s" % ( service.url, APP_SERVICE_PREFIX, kind, urllib.quote(protocol) ) try: response = yield self.get_json(uri, fields) if not isinstance(response, list): logger.warning( "query_3pe to %s returned an invalid response %r", uri, response ) defer.returnValue([]) ret = [] for r in response: if _is_valid_3pe_result(r, field=required_field): ret.append(r) else: logger.warning( "query_3pe to %s returned an invalid result %r", uri, r ) defer.returnValue(ret) except Exception as ex: logger.warning("query_3pe to %s threw exception %s", uri, ex) defer.returnValue([]) def get_3pe_protocol(self, service, protocol): if service.url is None: defer.returnValue({}) @defer.inlineCallbacks def _get(): uri = "%s%s/thirdparty/protocol/%s" % ( service.url, APP_SERVICE_PREFIX, urllib.quote(protocol) ) try: info = yield self.get_json(uri, {}) if not _is_valid_3pe_metadata(info): logger.warning("query_3pe_protocol to %s did not return a" " valid result", uri) defer.returnValue(None) for instance in info.get("instances", []): network_id = instance.get("network_id", None) if network_id is not None: instance["instance_id"] = ThirdPartyInstanceID( service.id, network_id, ).to_string() defer.returnValue(info) except Exception as ex: logger.warning("query_3pe_protocol to %s threw exception %s", uri, ex) defer.returnValue(None) key = (service.id, protocol) return self.protocol_meta_cache.get(key) or ( self.protocol_meta_cache.set(key, _get()) ) @defer.inlineCallbacks def push_bulk(self, service, events, txn_id=None): if service.url is None: defer.returnValue(True) events = self._serialize(events) if txn_id is None: logger.warning("push_bulk: Missing txn ID sending events to %s", service.url) txn_id = str(0) txn_id = str(txn_id) uri = service.url + ("/transactions/%s" % urllib.quote(txn_id)) try: yield self.put_json( uri=uri, 
json_body={ "events": events }, args={ "access_token": service.hs_token }) defer.returnValue(True) return except CodeMessageException as e: logger.warning("push_bulk to %s received %s", uri, e.code) except Exception as ex: logger.warning("push_bulk to %s threw exception %s", uri, ex) defer.returnValue(False) def _serialize(self, events): time_now = self.clock.time_msec() return [ serialize_event(e, time_now, as_client_event=True) for e in events ] synapse-0.24.0/synapse/appservice/scheduler.py000066400000000000000000000217741317335640100214500ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ This module controls the reliability for application service transactions. The nominal flow through this module looks like: __________ 1---ASa[e]-->| Service |--> Queue ASa[f] 2----ASb[e]->| Queuer | 3--ASa[f]--->|__________|-----------+ ASa[e], ASb[e] V -````````- +------------+ |````````|<--StoreTxn-|Transaction | |Database| | Controller |---> SEND TO AS `--------` +------------+ What happens on SEND TO AS depends on the state of the Application Service: - If the AS is marked as DOWN, do nothing. - If the AS is marked as UP, send the transaction. * SUCCESS : Increment where the AS is up to txn-wise and nuke the txn contents from the db. * FAILURE : Marked AS as DOWN and start Recoverer. Recoverer attempts to recover ASes who have died. The flow for this looks like: ,--------------------- backoff++ --------------. V | START ---> Wait exp ------> Get oldest txn ID from ----> FAILURE backoff DB and try to send it ^ |___________ Mark AS as | V UP & quit +---------- YES SUCCESS | | | NO <--- Have more txns? <------ Mark txn success & nuke <-+ from db; incr AS pos. Reset backoff. This is all tied together by the AppServiceScheduler which DIs the required components. """ from twisted.internet import defer from synapse.appservice import ApplicationServiceState from synapse.util.logcontext import preserve_fn from synapse.util.metrics import Measure import logging logger = logging.getLogger(__name__) class ApplicationServiceScheduler(object): """ Public facing API for this module. Does the required DI to tie the components together. This also serves as the "event_pool", which in this case is a simple array. """ def __init__(self, hs): self.clock = hs.get_clock() self.store = hs.get_datastore() self.as_api = hs.get_application_service_api() def create_recoverer(service, callback): return _Recoverer(self.clock, self.store, self.as_api, service, callback) self.txn_ctrl = _TransactionController( self.clock, self.store, self.as_api, create_recoverer ) self.queuer = _ServiceQueuer(self.txn_ctrl, self.clock) @defer.inlineCallbacks def start(self): logger.info("Starting appservice scheduler") # check for any DOWN ASes and start recoverers for them. 
recoverers = yield _Recoverer.start( self.clock, self.store, self.as_api, self.txn_ctrl.on_recovered ) self.txn_ctrl.add_recoverers(recoverers) def submit_event_for_as(self, service, event): self.queuer.enqueue(service, event) class _ServiceQueuer(object): """Queues events for the same application service together, sending transactions as soon as possible. Once a transaction is sent successfully, this schedules any other events in the queue to run. """ def __init__(self, txn_ctrl, clock): self.queued_events = {} # dict of {service_id: [events]} self.requests_in_flight = set() self.txn_ctrl = txn_ctrl self.clock = clock def enqueue(self, service, event): # if this service isn't being sent something self.queued_events.setdefault(service.id, []).append(event) preserve_fn(self._send_request)(service) @defer.inlineCallbacks def _send_request(self, service): if service.id in self.requests_in_flight: return self.requests_in_flight.add(service.id) try: while True: events = self.queued_events.pop(service.id, []) if not events: return with Measure(self.clock, "servicequeuer.send"): try: yield self.txn_ctrl.send(service, events) except: logger.exception("AS request failed") finally: self.requests_in_flight.discard(service.id) class _TransactionController(object): def __init__(self, clock, store, as_api, recoverer_fn): self.clock = clock self.store = store self.as_api = as_api self.recoverer_fn = recoverer_fn # keep track of how many recoverers there are self.recoverers = [] @defer.inlineCallbacks def send(self, service, events): try: txn = yield self.store.create_appservice_txn( service=service, events=events ) service_is_up = yield self._is_service_up(service) if service_is_up: sent = yield txn.send(self.as_api) if sent: yield txn.complete(self.store) else: preserve_fn(self._start_recoverer)(service) except Exception as e: logger.exception(e) preserve_fn(self._start_recoverer)(service) @defer.inlineCallbacks def on_recovered(self, recoverer): self.recoverers.remove(recoverer) logger.info("Successfully recovered application service AS ID %s", recoverer.service.id) logger.info("Remaining active recoverers: %s", len(self.recoverers)) yield self.store.set_appservice_state( recoverer.service, ApplicationServiceState.UP ) def add_recoverers(self, recoverers): for r in recoverers: self.recoverers.append(r) if len(recoverers) > 0: logger.info("New active recoverers: %s", len(self.recoverers)) @defer.inlineCallbacks def _start_recoverer(self, service): yield self.store.set_appservice_state( service, ApplicationServiceState.DOWN ) logger.info( "Application service falling behind. Starting recoverer. 
AS ID %s", service.id ) recoverer = self.recoverer_fn(service, self.on_recovered) self.add_recoverers([recoverer]) recoverer.recover() @defer.inlineCallbacks def _is_service_up(self, service): state = yield self.store.get_appservice_state(service) defer.returnValue(state == ApplicationServiceState.UP or state is None) class _Recoverer(object): @staticmethod @defer.inlineCallbacks def start(clock, store, as_api, callback): services = yield store.get_appservices_by_state( ApplicationServiceState.DOWN ) recoverers = [ _Recoverer(clock, store, as_api, s, callback) for s in services ] for r in recoverers: logger.info("Starting recoverer for AS ID %s which was marked as " "DOWN", r.service.id) r.recover() defer.returnValue(recoverers) def __init__(self, clock, store, as_api, service, callback): self.clock = clock self.store = store self.as_api = as_api self.service = service self.callback = callback self.backoff_counter = 1 def recover(self): self.clock.call_later((2 ** self.backoff_counter), self.retry) def _backoff(self): # cap the backoff to be around 8.5min => (2^9) = 512 secs if self.backoff_counter < 9: self.backoff_counter += 1 self.recover() @defer.inlineCallbacks def retry(self): try: txn = yield self.store.get_oldest_unsent_txn(self.service) if txn: logger.info("Retrying transaction %s for AS ID %s", txn.id, txn.service.id) sent = yield txn.send(self.as_api) if sent: yield txn.complete(self.store) # reset the backoff counter and retry immediately self.backoff_counter = 1 yield self.retry() else: self._backoff() else: self._set_service_recovered() except Exception as e: logger.exception(e) self._backoff() def _set_service_recovered(self): self.callback(self) synapse-0.24.0/synapse/config/000077500000000000000000000000001317335640100162115ustar00rootroot00000000000000synapse-0.24.0/synapse/config/__init__.py000066400000000000000000000011371317335640100203240ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. synapse-0.24.0/synapse/config/__main__.py000066400000000000000000000022071317335640100203040ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
from synapse.config._base import ConfigError if __name__ == "__main__": import sys from homeserver import HomeServerConfig action = sys.argv[1] if action == "read": key = sys.argv[2] try: config = HomeServerConfig.load_config("", sys.argv[3:]) except ConfigError as e: sys.stderr.write("\n" + e.message + "\n") sys.exit(1) print (getattr(config, key)) sys.exit(0) else: sys.stderr.write("Unknown command %r\n" % (action,)) sys.exit(1) synapse-0.24.0/synapse/config/_base.py000066400000000000000000000337331317335640100176450ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import argparse import errno import os import yaml from textwrap import dedent class ConfigError(Exception): pass # We split these messages out to allow packages to override with package # specific instructions. MISSING_REPORT_STATS_CONFIG_INSTRUCTIONS = """\ Please opt in or out of reporting anonymized homeserver usage statistics, by setting the `report_stats` key in your config file to either True or False. """ MISSING_REPORT_STATS_SPIEL = """\ We would really appreciate it if you could help our project out by reporting anonymized usage statistics from your homeserver. Only very basic aggregate data (e.g. number of users) will be reported, but it helps us to track the growth of the Matrix community, and helps us to make Matrix a success, as well as to convince other networks that they should peer with us. Thank you. """ MISSING_SERVER_NAME = """\ Missing mandatory `server_name` config option. """ class Config(object): @staticmethod def parse_size(value): if isinstance(value, int) or isinstance(value, long): return value sizes = {"K": 1024, "M": 1024 * 1024} size = 1 suffix = value[-1] if suffix in sizes: value = value[:-1] size = sizes[suffix] return int(value) * size @staticmethod def parse_duration(value): if isinstance(value, int) or isinstance(value, long): return value second = 1000 minute = 60 * second hour = 60 * minute day = 24 * hour week = 7 * day year = 365 * day sizes = {"s": second, "m": minute, "h": hour, "d": day, "w": week, "y": year} size = 1 suffix = value[-1] if suffix in sizes: value = value[:-1] size = sizes[suffix] return int(value) * size @staticmethod def abspath(file_path): return os.path.abspath(file_path) if file_path else file_path @classmethod def path_exists(cls, file_path): """Check if a file exists Unlike os.path.exists, this throws an exception if there is an error checking if the file exists (for example, if there is a perms error on the parent dir). Returns: bool: True if the file exists; False if not. """ try: os.stat(file_path) return True except OSError as e: if e.errno != errno.ENOENT: raise e return False @classmethod def check_file(cls, file_path, config_name): if file_path is None: raise ConfigError( "Missing config for %s." 
% (config_name,) ) try: os.stat(file_path) except OSError as e: raise ConfigError( "Error accessing file '%s' (config for %s): %s" % (file_path, config_name, e.strerror) ) return cls.abspath(file_path) @classmethod def ensure_directory(cls, dir_path): dir_path = cls.abspath(dir_path) try: os.makedirs(dir_path) except OSError as e: if e.errno != errno.EEXIST: raise if not os.path.isdir(dir_path): raise ConfigError( "%s is not a directory" % (dir_path,) ) return dir_path @classmethod def read_file(cls, file_path, config_name): cls.check_file(file_path, config_name) with open(file_path) as file_stream: return file_stream.read() @staticmethod def default_path(name): return os.path.abspath(os.path.join(os.path.curdir, name)) @staticmethod def read_config_file(file_path): with open(file_path) as file_stream: return yaml.load(file_stream) def invoke_all(self, name, *args, **kargs): results = [] for cls in type(self).mro(): if name in cls.__dict__: results.append(getattr(cls, name)(self, *args, **kargs)) return results def generate_config( self, config_dir_path, server_name, is_generating_file, report_stats=None, ): default_config = "# vim:ft=yaml\n" default_config += "\n\n".join(dedent(conf) for conf in self.invoke_all( "default_config", config_dir_path=config_dir_path, server_name=server_name, is_generating_file=is_generating_file, report_stats=report_stats, )) config = yaml.load(default_config) return default_config, config @classmethod def load_config(cls, description, argv): config_parser = argparse.ArgumentParser( description=description, ) config_parser.add_argument( "-c", "--config-path", action="append", metavar="CONFIG_FILE", help="Specify config file. Can be given multiple times and" " may specify directories containing *.yaml files." ) config_parser.add_argument( "--keys-directory", metavar="DIRECTORY", help="Where files such as certs and signing keys are stored when" " their location is given explicitly in the config." " Defaults to the directory containing the last config file", ) config_args = config_parser.parse_args(argv) config_files = find_config_files(search_paths=config_args.config_path) obj = cls() obj.read_config_files( config_files, keys_directory=config_args.keys_directory, generate_keys=False, ) return obj @classmethod def load_or_generate_config(cls, description, argv): config_parser = argparse.ArgumentParser(add_help=False) config_parser.add_argument( "-c", "--config-path", action="append", metavar="CONFIG_FILE", help="Specify config file. Can be given multiple times and" " may specify directories containing *.yaml files." ) config_parser.add_argument( "--generate-config", action="store_true", help="Generate a config file for the server name" ) config_parser.add_argument( "--report-stats", action="store", help="Whether the generated config reports anonymized usage statistics", choices=["yes", "no"] ) config_parser.add_argument( "--generate-keys", action="store_true", help="Generate any missing key files then exit" ) config_parser.add_argument( "--keys-directory", metavar="DIRECTORY", help="Used with 'generate-*' options to specify where files such as" " certs and signing keys should be stored in, unless explicitly" " specified in the config." 
) config_parser.add_argument( "-H", "--server-name", help="The server name to generate a config file for" ) config_args, remaining_args = config_parser.parse_known_args(argv) config_files = find_config_files(search_paths=config_args.config_path) generate_keys = config_args.generate_keys obj = cls() if config_args.generate_config: if config_args.report_stats is None: config_parser.error( "Please specify either --report-stats=yes or --report-stats=no\n\n" + MISSING_REPORT_STATS_SPIEL ) if not config_files: config_parser.error( "Must supply a config file.\nA config file can be automatically" " generated using \"--generate-config -H SERVER_NAME" " -c CONFIG-FILE\"" ) (config_path,) = config_files if not cls.path_exists(config_path): if config_args.keys_directory: config_dir_path = config_args.keys_directory else: config_dir_path = os.path.dirname(config_path) config_dir_path = os.path.abspath(config_dir_path) server_name = config_args.server_name if not server_name: raise ConfigError( "Must specify a server_name to generate a config for." " Pass -H server.name." ) if not cls.path_exists(config_dir_path): os.makedirs(config_dir_path) with open(config_path, "wb") as config_file: config_bytes, config = obj.generate_config( config_dir_path=config_dir_path, server_name=server_name, report_stats=(config_args.report_stats == "yes"), is_generating_file=True ) obj.invoke_all("generate_files", config) config_file.write(config_bytes) print ( "A config file has been generated in %r for server name" " %r with corresponding SSL keys and self-signed" " certificates. Please review this file and customise it" " to your needs." ) % (config_path, server_name) print ( "If this server name is incorrect, you will need to" " regenerate the SSL certificates" ) return else: print ( "Config file %r already exists. Generating any missing key" " files." ) % (config_path,) generate_keys = True parser = argparse.ArgumentParser( parents=[config_parser], description=description, formatter_class=argparse.RawDescriptionHelpFormatter, ) obj.invoke_all("add_arguments", parser) args = parser.parse_args(remaining_args) if not config_files: config_parser.error( "Must supply a config file.\nA config file can be automatically" " generated using \"--generate-config -H SERVER_NAME" " -c CONFIG-FILE\"" ) obj.read_config_files( config_files, keys_directory=config_args.keys_directory, generate_keys=generate_keys, ) if generate_keys: return None obj.invoke_all("read_arguments", args) return obj def read_config_files(self, config_files, keys_directory=None, generate_keys=False): if not keys_directory: keys_directory = os.path.dirname(config_files[-1]) config_dir_path = os.path.abspath(keys_directory) specified_config = {} for config_file in config_files: yaml_config = self.read_config_file(config_file) specified_config.update(yaml_config) if "server_name" not in specified_config: raise ConfigError(MISSING_SERVER_NAME) server_name = specified_config["server_name"] _, config = self.generate_config( config_dir_path=config_dir_path, server_name=server_name, is_generating_file=False, ) config.pop("log_config") config.update(specified_config) if "report_stats" not in config: raise ConfigError( MISSING_REPORT_STATS_CONFIG_INSTRUCTIONS + "\n" + MISSING_REPORT_STATS_SPIEL ) if generate_keys: self.invoke_all("generate_files", config) return self.invoke_all("read_config", config) def find_config_files(search_paths): """Finds config files using a list of search paths. If a path is a file then that file path is added to the list.
If a search path is a directory then all the "*.yaml" files in that directory are added to the list in sorted order. Args: search_paths(list(str)): A list of paths to search. Returns: list(str): A list of file paths. """ config_files = [] if search_paths: for config_path in search_paths: if os.path.isdir(config_path): # We accept specifying directories as config paths, we search # inside that directory for all files matching *.yaml, and then # we apply them in *sorted* order. files = [] for entry in os.listdir(config_path): entry_path = os.path.join(config_path, entry) if not os.path.isfile(entry_path): print ( "Found subdirectory in config directory: %r. IGNORING." ) % (entry_path, ) continue if not entry.endswith(".yaml"): print ( "Found file in config directory that does not" " end in '.yaml': %r. IGNORING." ) % (entry_path, ) continue files.append(entry_path) config_files.extend(sorted(files)) else: config_files.append(config_path) return config_files synapse-0.24.0/synapse/config/api.py000066400000000000000000000024301317335640100173330ustar00rootroot00000000000000# Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from ._base import Config from synapse.api.constants import EventTypes class ApiConfig(Config): def read_config(self, config): self.room_invite_state_types = config.get("room_invite_state_types", [ EventTypes.JoinRules, EventTypes.CanonicalAlias, EventTypes.RoomAvatar, EventTypes.Name, ]) def default_config(cls, **kwargs): return """\ ## API Configuration ## # A list of event types that will be included in the room_invite_state room_invite_state_types: - "{JoinRules}" - "{CanonicalAlias}" - "{RoomAvatar}" - "{Name}" """.format(**vars(EventTypes)) synapse-0.24.0/synapse/config/appservice.py000066400000000000000000000137461317335640100207370ustar00rootroot00000000000000# Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
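# For reference, a rough sketch of the smallest registration file that
# _load_appservice() below should accept. The field names come from the
# validation code in this module; the values are purely illustrative, and the
# namespace group names (users/aliases/rooms) come from
# ApplicationService.NS_LIST, which is defined elsewhere:
#
#     id: "example_bridge"
#     url: "http://localhost:8090"      # may also be an explicit null
#     as_token: "<random string>"
#     hs_token: "<random string>"
#     sender_localpart: "example_bot"   # must not require URL encoding
#     rate_limited: true                # optional; defaults to true
#     namespaces:
#       users:
#         - exclusive: true
#           regex: "@example_.*"
#       aliases: []
#       rooms: []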
from ._base import Config, ConfigError from synapse.appservice import ApplicationService from synapse.types import UserID import urllib import yaml import logging logger = logging.getLogger(__name__) class AppServiceConfig(Config): def read_config(self, config): self.app_service_config_files = config.get("app_service_config_files", []) self.notify_appservices = config.get("notify_appservices", True) def default_config(cls, **kwargs): return """\ # A list of application service config file to use app_service_config_files: [] """ def load_appservices(hostname, config_files): """Returns a list of Application Services from the config files.""" if not isinstance(config_files, list): logger.warning( "Expected %s to be a list of AS config files.", config_files ) return [] # Dicts of value -> filename seen_as_tokens = {} seen_ids = {} appservices = [] for config_file in config_files: try: with open(config_file, 'r') as f: appservice = _load_appservice( hostname, yaml.load(f), config_file ) if appservice.id in seen_ids: raise ConfigError( "Cannot reuse ID across application services: " "%s (files: %s, %s)" % ( appservice.id, config_file, seen_ids[appservice.id], ) ) seen_ids[appservice.id] = config_file if appservice.token in seen_as_tokens: raise ConfigError( "Cannot reuse as_token across application services: " "%s (files: %s, %s)" % ( appservice.token, config_file, seen_as_tokens[appservice.token], ) ) seen_as_tokens[appservice.token] = config_file logger.info("Loaded application service: %s", appservice) appservices.append(appservice) except Exception as e: logger.error("Failed to load appservice from '%s'", config_file) logger.exception(e) raise return appservices def _load_appservice(hostname, as_info, config_filename): required_string_fields = [ "id", "as_token", "hs_token", "sender_localpart" ] for field in required_string_fields: if not isinstance(as_info.get(field), basestring): raise KeyError("Required string field: '%s' (%s)" % ( field, config_filename, )) # 'url' must either be a string or explicitly null, not missing # to avoid accidentally turning off push for ASes. if (not isinstance(as_info.get("url"), basestring) and as_info.get("url", "") is not None): raise KeyError( "Required string field or explicit null: 'url' (%s)" % (config_filename,) ) localpart = as_info["sender_localpart"] if urllib.quote(localpart) != localpart: raise ValueError( "sender_localpart needs characters which are not URL encoded." 
) user = UserID(localpart, hostname) user_id = user.to_string() # Rate limiting for users of this AS is on by default (excludes sender) rate_limited = True if isinstance(as_info.get("rate_limited"), bool): rate_limited = as_info.get("rate_limited") # namespace checks if not isinstance(as_info.get("namespaces"), dict): raise KeyError("Requires 'namespaces' object.") for ns in ApplicationService.NS_LIST: # specific namespaces are optional if ns in as_info["namespaces"]: # expect a list of dicts with exclusive and regex keys for regex_obj in as_info["namespaces"][ns]: if not isinstance(regex_obj, dict): raise ValueError( "Expected namespace entry in %s to be an object," " but got %s", ns, regex_obj ) if not isinstance(regex_obj.get("regex"), basestring): raise ValueError( "Missing/bad type 'regex' key in %s", regex_obj ) if not isinstance(regex_obj.get("exclusive"), bool): raise ValueError( "Missing/bad type 'exclusive' key in %s", regex_obj ) # protocols check protocols = as_info.get("protocols") if protocols: # Because strings are lists in python if isinstance(protocols, str) or not isinstance(protocols, list): raise KeyError("Optional 'protocols' must be a list if present.") for p in protocols: if not isinstance(p, str): raise KeyError("Bad value for 'protocols' item") if as_info["url"] is None: logger.info( "(%s) Explicitly empty 'url' provided. This application service" " will not receive events or queries.", config_filename, ) return ApplicationService( token=as_info["as_token"], url=as_info["url"], namespaces=as_info["namespaces"], hs_token=as_info["hs_token"], sender=user_id, id=as_info["id"], protocols=protocols, rate_limited=rate_limited ) synapse-0.24.0/synapse/config/captcha.py000066400000000000000000000035561317335640100201770ustar00rootroot00000000000000# Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from ._base import Config class CaptchaConfig(Config): def read_config(self, config): self.recaptcha_private_key = config["recaptcha_private_key"] self.recaptcha_public_key = config["recaptcha_public_key"] self.enable_registration_captcha = config["enable_registration_captcha"] self.captcha_bypass_secret = config.get("captcha_bypass_secret") self.recaptcha_siteverify_api = config["recaptcha_siteverify_api"] def default_config(self, **kwargs): return """\ ## Captcha ## # See docs/CAPTCHA_SETUP for full details of configuring this. # This Home Server's ReCAPTCHA public key. recaptcha_public_key: "YOUR_PUBLIC_KEY" # This Home Server's ReCAPTCHA private key. recaptcha_private_key: "YOUR_PRIVATE_KEY" # Enables ReCaptcha checks when registering, preventing signup # unless a captcha is answered. Requires a valid ReCaptcha # public/private key. enable_registration_captcha: False # A secret key used to bypass the captcha test entirely. #captcha_bypass_secret: "YOUR_SECRET_HERE" # The API endpoint to use for verifying m.login.recaptcha responses. 
recaptcha_siteverify_api: "https://www.google.com/recaptcha/api/siteverify" """ synapse-0.24.0/synapse/config/cas.py000066400000000000000000000031741317335640100173360ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from ._base import Config class CasConfig(Config): """Cas Configuration cas_server_url: URL of CAS server """ def read_config(self, config): cas_config = config.get("cas_config", None) if cas_config: self.cas_enabled = cas_config.get("enabled", True) self.cas_server_url = cas_config["server_url"] self.cas_service_url = cas_config["service_url"] self.cas_required_attributes = cas_config.get("required_attributes", {}) else: self.cas_enabled = False self.cas_server_url = None self.cas_service_url = None self.cas_required_attributes = {} def default_config(self, config_dir_path, server_name, **kwargs): return """ # Enable CAS for registration and login. #cas_config: # enabled: true # server_url: "https://cas-server.com" # service_url: "https://homesever.domain.com:8448" # #required_attributes: # # name: value """ synapse-0.24.0/synapse/config/database.py000066400000000000000000000051051317335640100203300ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from ._base import Config class DatabaseConfig(Config): def read_config(self, config): self.event_cache_size = self.parse_size( config.get("event_cache_size", "10K") ) self.database_config = config.get("database") if self.database_config is None: self.database_config = { "name": "sqlite3", "args": {}, } name = self.database_config.get("name", None) if name == "psycopg2": pass elif name == "sqlite3": self.database_config.setdefault("args", {}).update({ "cp_min": 1, "cp_max": 1, "check_same_thread": False, }) else: raise RuntimeError("Unsupported database type '%s'" % (name,)) self.set_databasepath(config.get("database_path")) def default_config(self, **kwargs): database_path = self.abspath("homeserver.db") return """\ # Database configuration database: # The database engine name name: "sqlite3" # Arguments to pass to the engine args: # Path to the database database: "%(database_path)s" # Number of events to cache in memory. 
event_cache_size: "10K" """ % locals() def read_arguments(self, args): self.set_databasepath(args.database_path) def set_databasepath(self, database_path): if database_path != ":memory:": database_path = self.abspath(database_path) if self.database_config.get("name", None) == "sqlite3": if database_path is not None: self.database_config["args"]["database"] = database_path def add_arguments(self, parser): db_group = parser.add_argument_group("database") db_group.add_argument( "-d", "--database-path", metavar="SQLITE_DATABASE_PATH", help="The path to a sqlite database to use." ) synapse-0.24.0/synapse/config/emailconfig.py000066400000000000000000000107431317335640100210450ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # This file can't be called email.py because if it is, we cannot: import email.utils from ._base import Config class EmailConfig(Config): def read_config(self, config): self.email_enable_notifs = False email_config = config.get("email", {}) self.email_enable_notifs = email_config.get("enable_notifs", False) if self.email_enable_notifs: # make sure we can import the required deps import jinja2 import bleach # prevent unused warnings jinja2 bleach required = [ "smtp_host", "smtp_port", "notif_from", "template_dir", "notif_template_html", "notif_template_text", ] missing = [] for k in required: if k not in email_config: missing.append(k) if (len(missing) > 0): raise RuntimeError( "email.enable_notifs is True but required keys are missing: %s" % (", ".join(["email." + k for k in missing]),) ) if config.get("public_baseurl") is None: raise RuntimeError( "email.enable_notifs is True but no public_baseurl is set" ) self.email_smtp_host = email_config["smtp_host"] self.email_smtp_port = email_config["smtp_port"] self.email_notif_from = email_config["notif_from"] self.email_template_dir = email_config["template_dir"] self.email_notif_template_html = email_config["notif_template_html"] self.email_notif_template_text = email_config["notif_template_text"] self.email_notif_for_new_users = email_config.get( "notif_for_new_users", True ) self.email_riot_base_url = email_config.get( "riot_base_url", None ) self.email_smtp_user = email_config.get( "smtp_user", None ) self.email_smtp_pass = email_config.get( "smtp_pass", None ) self.require_transport_security = email_config.get( "require_transport_security", False ) if "app_name" in email_config: self.email_app_name = email_config["app_name"] else: self.email_app_name = "Matrix" # make sure it's valid parsed = email.utils.parseaddr(self.email_notif_from) if parsed[1] == '': raise RuntimeError("Invalid notif_from address") else: self.email_enable_notifs = False # Not much point setting defaults for the rest: it would be an # error for them to be used. 
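    # Note on the notif_from check above: email.utils.parseaddr accepts both a
    # bare address ("notifications@example.com") and a display-name form
    # ("My Server <notifications@example.com>"); only values from which it can
    # extract nothing at all (parsed[1] == '') are rejected. The example.com
    # addresses here are illustrative only.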
def default_config(self, config_dir_path, server_name, **kwargs): return """ # Enable sending emails for notification events # Defining a custom URL for Riot is only needed if email notifications # should contain links to a self-hosted installation of Riot; when set # the "app_name" setting is ignored. # # If your SMTP server requires authentication, the optional smtp_user & # smtp_pass variables should be used # #email: # enable_notifs: false # smtp_host: "localhost" # smtp_port: 25 # smtp_user: "exampleusername" # smtp_pass: "examplepassword" # require_transport_security: False # notif_from: "Your Friendly %(app)s Home Server " # app_name: Matrix # template_dir: res/templates # notif_template_html: notif_mail.html # notif_template_text: notif_mail.txt # notif_for_new_users: True # riot_base_url: "http://localhost/riot" """ synapse-0.24.0/synapse/config/groups.py000066400000000000000000000022621317335640100201040ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2017 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from ._base import Config class GroupsConfig(Config): def read_config(self, config): self.enable_group_creation = config.get("enable_group_creation", False) self.group_creation_prefix = config.get("group_creation_prefix", "") def default_config(self, **kwargs): return """\ # Whether to allow non server admins to create groups on this server enable_group_creation: false # If enabled, non server admins can only create groups with local parts # starting with this prefix # group_creation_prefix: "unofficial/" """ synapse-0.24.0/synapse/config/homeserver.py000066400000000000000000000040641317335640100207460ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
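# HomeServerConfig below carries no logic of its own: it simply inherits from
# every *Config class imported here, and Config.invoke_all() (defined in
# _base.py) walks the MRO calling each class's read_config / default_config /
# add_arguments / generate_files hook in turn. A rough sketch of what a new
# section would look like (the class and option names are illustrative, not
# part of this codebase):
#
#     class ExampleConfig(Config):
#         def read_config(self, config):
#             self.example_option = config.get("example_option", False)
#
#         def default_config(self, **kwargs):
#             return """\
#     # Enable the example feature
#     example_option: false
#     """
#
# followed by adding ExampleConfig to HomeServerConfig's list of bases.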
from .tls import TlsConfig from .server import ServerConfig from .logger import LoggingConfig from .database import DatabaseConfig from .ratelimiting import RatelimitConfig from .repository import ContentRepositoryConfig from .captcha import CaptchaConfig from .voip import VoipConfig from .registration import RegistrationConfig from .metrics import MetricsConfig from .api import ApiConfig from .appservice import AppServiceConfig from .key import KeyConfig from .saml2 import SAML2Config from .cas import CasConfig from .password import PasswordConfig from .jwt import JWTConfig from .password_auth_providers import PasswordAuthProviderConfig from .emailconfig import EmailConfig from .workers import WorkerConfig from .push import PushConfig from .spam_checker import SpamCheckerConfig from .groups import GroupsConfig class HomeServerConfig(TlsConfig, ServerConfig, DatabaseConfig, LoggingConfig, RatelimitConfig, ContentRepositoryConfig, CaptchaConfig, VoipConfig, RegistrationConfig, MetricsConfig, ApiConfig, AppServiceConfig, KeyConfig, SAML2Config, CasConfig, JWTConfig, PasswordConfig, EmailConfig, WorkerConfig, PasswordAuthProviderConfig, PushConfig, SpamCheckerConfig, GroupsConfig,): pass if __name__ == '__main__': import sys sys.stdout.write( HomeServerConfig().generate_config(sys.argv[1], sys.argv[2])[0] ) synapse-0.24.0/synapse/config/jwt.py000066400000000000000000000031731317335640100173730ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015 Niklas Riekenbrauck # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from ._base import Config, ConfigError MISSING_JWT = ( """Missing jwt library. This is required for jwt login. Install by running: pip install pyjwt """ ) class JWTConfig(Config): def read_config(self, config): jwt_config = config.get("jwt_config", None) if jwt_config: self.jwt_enabled = jwt_config.get("enabled", False) self.jwt_secret = jwt_config["secret"] self.jwt_algorithm = jwt_config["algorithm"] try: import jwt jwt # To stop unused lint. except ImportError: raise ConfigError(MISSING_JWT) else: self.jwt_enabled = False self.jwt_secret = None self.jwt_algorithm = None def default_config(self, **kwargs): return """\ # The JWT needs to contain a globally unique "sub" (subject) claim. # # jwt_config: # enabled: true # secret: "a secret" # algorithm: "HS256" """ synapse-0.24.0/synapse/config/key.py000066400000000000000000000141061317335640100173550ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
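# generate_files() below creates a fresh signing key if none exists. The
# equivalent done by hand, using the same signedjson helpers imported in this
# module (the file name and key id are illustrative):
#
#     from signedjson.key import generate_signing_key, write_signing_keys
#
#     with open("example.signing.key", "w") as f:
#         write_signing_keys(f, (generate_signing_key("a_abcd"),))
#
# generate_files() itself uses "a_" plus four random characters as the key id.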
from ._base import Config, ConfigError from synapse.util.stringutils import random_string from signedjson.key import ( generate_signing_key, is_signing_algorithm_supported, decode_signing_key_base64, decode_verify_key_bytes, read_signing_keys, write_signing_keys, NACL_ED25519 ) from unpaddedbase64 import decode_base64 from synapse.util.stringutils import random_string_with_symbols import os import hashlib import logging logger = logging.getLogger(__name__) class KeyConfig(Config): def read_config(self, config): self.signing_key = self.read_signing_key(config["signing_key_path"]) self.old_signing_keys = self.read_old_signing_keys( config["old_signing_keys"] ) self.key_refresh_interval = self.parse_duration( config["key_refresh_interval"] ) self.perspectives = self.read_perspectives( config["perspectives"] ) self.macaroon_secret_key = config.get( "macaroon_secret_key", self.registration_shared_secret ) if not self.macaroon_secret_key: # Unfortunately, there are people out there that don't have this # set. Let's just be "nice" and derive one from their secret key. logger.warn("Config is missing macaroon_secret_key") seed = self.signing_key[0].seed self.macaroon_secret_key = hashlib.sha256(seed) self.expire_access_token = config.get("expire_access_token", False) def default_config(self, config_dir_path, server_name, is_generating_file=False, **kwargs): base_key_name = os.path.join(config_dir_path, server_name) if is_generating_file: macaroon_secret_key = random_string_with_symbols(50) else: macaroon_secret_key = None return """\ macaroon_secret_key: "%(macaroon_secret_key)s" # Used to enable access token expiration. expire_access_token: False ## Signing Keys ## # Path to the signing key to sign messages with signing_key_path: "%(base_key_name)s.signing.key" # The keys that the server used to sign messages with but won't use # to sign new messages. E.g. it has lost its private key old_signing_keys: {} # "ed25519:auto": # # Base64 encoded public key # key: "The public part of your old signing key." # # Millisecond POSIX timestamp when the key expired. # expired_ts: 123456789123 # How long key response published by this server is valid for. # Used to set the valid_until_ts in /key/v2 APIs. # Determines how quickly servers will query to check which keys # are still valid. key_refresh_interval: "1d" # 1 Day. # The trusted servers to download signing keys from.
perspectives: servers: "matrix.org": verify_keys: "ed25519:auto": key: "Noi6WqcDj0QmPxCNQqgezwTlBKrfqehY1u2FyWP9uYw" """ % locals() def read_perspectives(self, perspectives_config): servers = {} for server_name, server_config in perspectives_config["servers"].items(): for key_id, key_data in server_config["verify_keys"].items(): if is_signing_algorithm_supported(key_id): key_base64 = key_data["key"] key_bytes = decode_base64(key_base64) verify_key = decode_verify_key_bytes(key_id, key_bytes) servers.setdefault(server_name, {})[key_id] = verify_key return servers def read_signing_key(self, signing_key_path): signing_keys = self.read_file(signing_key_path, "signing_key") try: return read_signing_keys(signing_keys.splitlines(True)) except Exception as e: raise ConfigError( "Error reading signing_key: %s" % (str(e)) ) def read_old_signing_keys(self, old_signing_keys): keys = {} for key_id, key_data in old_signing_keys.items(): if is_signing_algorithm_supported(key_id): key_base64 = key_data["key"] key_bytes = decode_base64(key_base64) verify_key = decode_verify_key_bytes(key_id, key_bytes) verify_key.expired_ts = key_data["expired_ts"] keys[key_id] = verify_key else: raise ConfigError( "Unsupported signing algorithm for old key: %r" % (key_id,) ) return keys def generate_files(self, config): signing_key_path = config["signing_key_path"] if not self.path_exists(signing_key_path): with open(signing_key_path, "w") as signing_key_file: key_id = "a_" + random_string(4) write_signing_keys( signing_key_file, (generate_signing_key(key_id),), ) else: signing_keys = self.read_file(signing_key_path, "signing_key") if len(signing_keys.split("\n")[0].split()) == 1: # handle keys in the old format. key_id = "a_" + random_string(4) key = decode_signing_key_base64( NACL_ED25519, key_id, signing_keys.split("\n")[0] ) with open(signing_key_path, "w") as signing_key_file: write_signing_keys( signing_key_file, (key,), ) synapse-0.24.0/synapse/config/logger.py000066400000000000000000000164301317335640100200460ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from ._base import Config from synapse.util.logcontext import LoggingContextFilter from twisted.logger import globalLogBeginner, STDLibLogObserver import logging import logging.config import yaml from string import Template import os import signal DEFAULT_LOG_CONFIG = Template(""" version: 1 formatters: precise: format: '%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s\ - %(message)s' filters: context: (): synapse.util.logcontext.LoggingContextFilter request: "" handlers: file: class: logging.handlers.RotatingFileHandler formatter: precise filename: ${log_file} maxBytes: 104857600 backupCount: 10 filters: [context] console: class: logging.StreamHandler formatter: precise filters: [context] loggers: synapse: level: INFO synapse.storage.SQL: # beware: increasing this to DEBUG will make synapse log sensitive # information such as access tokens. 
level: INFO root: level: INFO handlers: [file, console] """) class LoggingConfig(Config): def read_config(self, config): self.verbosity = config.get("verbose", 0) self.no_redirect_stdio = config.get("no_redirect_stdio", False) self.log_config = self.abspath(config.get("log_config")) self.log_file = self.abspath(config.get("log_file")) def default_config(self, config_dir_path, server_name, **kwargs): log_file = self.abspath("homeserver.log") log_config = self.abspath( os.path.join(config_dir_path, server_name + ".log.config") ) return """ # Logging verbosity level. Ignored if log_config is specified. verbose: 0 # File to write logging to. Ignored if log_config is specified. log_file: "%(log_file)s" # A yaml python logging config file log_config: "%(log_config)s" """ % locals() def read_arguments(self, args): if args.verbose is not None: self.verbosity = args.verbose if args.no_redirect_stdio is not None: self.no_redirect_stdio = args.no_redirect_stdio if args.log_config is not None: self.log_config = args.log_config if args.log_file is not None: self.log_file = args.log_file def add_arguments(cls, parser): logging_group = parser.add_argument_group("logging") logging_group.add_argument( '-v', '--verbose', dest="verbose", action='count', help="The verbosity level. Specify multiple times to increase " "verbosity. (Ignored if --log-config is specified.)" ) logging_group.add_argument( '-f', '--log-file', dest="log_file", help="File to log to. (Ignored if --log-config is specified.)" ) logging_group.add_argument( '--log-config', dest="log_config", default=None, help="Python logging config file" ) logging_group.add_argument( '-n', '--no-redirect-stdio', action='store_true', default=None, help="Do not redirect stdout/stderr to the log" ) def generate_files(self, config): log_config = config.get("log_config") if log_config and not os.path.exists(log_config): with open(log_config, "wb") as log_config_file: log_config_file.write( DEFAULT_LOG_CONFIG.substitute(log_file=config["log_file"]) ) def setup_logging(config, use_worker_options=False): """ Set up python logging Args: config (LoggingConfig | synapse.config.workers.WorkerConfig): configuration data use_worker_options (bool): True to use 'worker_log_config' and 'worker_log_file' options instead of 'log_config' and 'log_file'. 
""" log_config = (config.worker_log_config if use_worker_options else config.log_config) log_file = (config.worker_log_file if use_worker_options else config.log_file) log_format = ( "%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s" " - %(message)s" ) if log_config is None: level = logging.INFO level_for_storage = logging.INFO if config.verbosity: level = logging.DEBUG if config.verbosity > 1: level_for_storage = logging.DEBUG # FIXME: we need a logging.WARN for a -q quiet option logger = logging.getLogger('') logger.setLevel(level) logging.getLogger('synapse.storage').setLevel(level_for_storage) formatter = logging.Formatter(log_format) if log_file: # TODO: Customisable file size / backup count handler = logging.handlers.RotatingFileHandler( log_file, maxBytes=(1000 * 1000 * 100), backupCount=3 ) def sighup(signum, stack): logger.info("Closing log file due to SIGHUP") handler.doRollover() logger.info("Opened new log file due to SIGHUP") else: handler = logging.StreamHandler() handler.setFormatter(formatter) handler.addFilter(LoggingContextFilter(request="")) logger.addHandler(handler) else: def load_log_config(): with open(log_config, 'r') as f: logging.config.dictConfig(yaml.load(f)) def sighup(signum, stack): # it might be better to use a file watcher or something for this. logging.info("Reloading log config from %s due to SIGHUP", log_config) load_log_config() load_log_config() # TODO(paul): obviously this is a terrible mechanism for # stealing SIGHUP, because it means no other part of synapse # can use it instead. If we want to catch SIGHUP anywhere # else as well, I'd suggest we find a nicer way to broadcast # it around. if getattr(signal, "SIGHUP"): signal.signal(signal.SIGHUP, sighup) # It's critical to point twisted's internal logging somewhere, otherwise it # stacks up and leaks kup to 64K object; # see: https://twistedmatrix.com/trac/ticket/8164 # # Routing to the python logging framework could be a performance problem if # the handlers blocked for a long time as python.logging is a blocking API # see https://twistedmatrix.com/documents/current/core/howto/logger.html # filed as https://github.com/matrix-org/synapse/issues/1727 # # However this may not be too much of a problem if we are just writing to a file. observer = STDLibLogObserver() globalLogBeginner.beginLoggingTo( [observer], redirectStandardIO=not config.no_redirect_stdio, ) synapse-0.24.0/synapse/config/metrics.py000066400000000000000000000023631317335640100202350ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
from ._base import Config class MetricsConfig(Config): def read_config(self, config): self.enable_metrics = config["enable_metrics"] self.report_stats = config.get("report_stats", None) self.metrics_port = config.get("metrics_port") self.metrics_bind_host = config.get("metrics_bind_host", "127.0.0.1") def default_config(self, report_stats=None, **kwargs): suffix = "" if report_stats is None else "report_stats: %(report_stats)s\n" return ("""\ ## Metrics ### # Enable collection and rendering of performance metrics enable_metrics: False """ + suffix) % locals() synapse-0.24.0/synapse/config/password.py000066400000000000000000000024061317335640100204270ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from ._base import Config class PasswordConfig(Config): """Password login configuration """ def read_config(self, config): password_config = config.get("password_config", {}) self.password_enabled = password_config.get("enabled", True) self.password_pepper = password_config.get("pepper", "") def default_config(self, config_dir_path, server_name, **kwargs): return """ # Enable password for login. password_config: enabled: true # Uncomment and change to a secret random string for extra security. # DO NOT CHANGE THIS AFTER INITIAL SETUP! #pepper: "" """ synapse-0.24.0/synapse/config/password_auth_providers.py000066400000000000000000000053151317335640100235470ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2016 Openmarket # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from ._base import Config, ConfigError from synapse.util.module_loader import load_module class PasswordAuthProviderConfig(Config): def read_config(self, config): self.password_providers = [] provider_config = None # We want to be backwards compatible with the old `ldap_config` # param. ldap_config = config.get("ldap_config", {}) self.ldap_enabled = ldap_config.get("enabled", False) if self.ldap_enabled: from ldap_auth_provider import LdapAuthProvider parsed_config = LdapAuthProvider.parse_config(ldap_config) self.password_providers.append((LdapAuthProvider, parsed_config)) providers = config.get("password_providers", []) for provider in providers: # This is for backwards compat when the ldap auth provider resided # in this package. 
if provider['module'] == "synapse.util.ldap_auth_provider.LdapAuthProvider": from ldap_auth_provider import LdapAuthProvider provider_class = LdapAuthProvider try: provider_config = provider_class.parse_config(provider["config"]) except Exception as e: raise ConfigError( "Failed to parse config for %r: %r" % (provider['module'], e) ) else: (provider_class, provider_config) = load_module(provider) self.password_providers.append((provider_class, provider_config)) def default_config(self, **kwargs): return """\ # password_providers: # - module: "ldap_auth_provider.LdapAuthProvider" # config: # enabled: true # uri: "ldap://ldap.example.com:389" # start_tls: true # base: "ou=users,dc=example,dc=com" # attributes: # uid: "cn" # mail: "email" # name: "givenName" # #bind_dn: # #bind_password: # #filter: "(objectClass=posixAccount)" """ synapse-0.24.0/synapse/config/push.py000066400000000000000000000036271317335640100175520ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from ._base import Config class PushConfig(Config): def read_config(self, config): self.push_redact_content = False push_config = config.get("email", {}) self.push_redact_content = push_config.get("redact_content", False) def default_config(self, config_dir_path, server_name, **kwargs): return """ # Control how push messages are sent to google/apple to notifications. # Normally every message said in a room with one or more people using # mobile devices will be posted to a push server hosted by matrix.org # which is registered with google and apple in order to allow push # notifications to be sent to these mobile devices. # # Setting redact_content to true will make the push messages contain no # message content which will provide increased privacy. This is a # temporary solution pending improvements to Android and iPhone apps # to get content from the app rather than the notification. # # For modern android devices the notification content will still appear # because it is loaded by the app. iPhone, however will send a # notification saying only that a message arrived and who it came from. # #push: # redact_content: false """ synapse-0.24.0/synapse/config/ratelimiting.py000066400000000000000000000042701317335640100212560ustar00rootroot00000000000000# Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
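# Worked example of the default client message limits defined below: with
# rc_messages_per_second at 0.2 and rc_message_burst_count at 10.0, a client
# can send a burst of roughly ten messages and is then held to about one
# message every five seconds (1 / 0.2) until its allowance recovers.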
from ._base import Config class RatelimitConfig(Config): def read_config(self, config): self.rc_messages_per_second = config["rc_messages_per_second"] self.rc_message_burst_count = config["rc_message_burst_count"] self.federation_rc_window_size = config["federation_rc_window_size"] self.federation_rc_sleep_limit = config["federation_rc_sleep_limit"] self.federation_rc_sleep_delay = config["federation_rc_sleep_delay"] self.federation_rc_reject_limit = config["federation_rc_reject_limit"] self.federation_rc_concurrent = config["federation_rc_concurrent"] def default_config(self, **kwargs): return """\ ## Ratelimiting ## # Number of messages a client can send per second rc_messages_per_second: 0.2 # Number of message a client can send before being throttled rc_message_burst_count: 10.0 # The federation window size in milliseconds federation_rc_window_size: 1000 # The number of federation requests from a single server in a window # before the server will delay processing the request. federation_rc_sleep_limit: 10 # The duration in milliseconds to delay processing events from # remote servers by if they go over the sleep limit. federation_rc_sleep_delay: 500 # The maximum number of concurrent federation requests allowed # from a single server federation_rc_reject_limit: 50 # The number of federation requests to concurrently process from a # single server federation_rc_concurrent: 3 """ synapse-0.24.0/synapse/config/registration.py000066400000000000000000000064441317335640100213050ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from ._base import Config from synapse.util.stringutils import random_string_with_symbols from distutils.util import strtobool class RegistrationConfig(Config): def read_config(self, config): self.enable_registration = bool( strtobool(str(config["enable_registration"])) ) if "disable_registration" in config: self.enable_registration = not bool( strtobool(str(config["disable_registration"])) ) self.registration_shared_secret = config.get("registration_shared_secret") self.bcrypt_rounds = config.get("bcrypt_rounds", 12) self.trusted_third_party_id_servers = config["trusted_third_party_id_servers"] self.allow_guest_access = config.get("allow_guest_access", False) self.invite_3pid_guest = ( self.allow_guest_access and config.get("invite_3pid_guest", False) ) self.auto_join_rooms = config.get("auto_join_rooms", []) def default_config(self, **kwargs): registration_shared_secret = random_string_with_symbols(50) return """\ ## Registration ## # Enable registration for new users. enable_registration: False # If set, allows registration by anyone who also has the shared # secret, even if registration is otherwise disabled. registration_shared_secret: "%(registration_shared_secret)s" # Set the number of bcrypt rounds used to generate password hash. # Larger numbers increase the work factor needed to generate the hash. # The default number of rounds is 12. 
bcrypt_rounds: 12 # Allows users to register as guests without a password/email/etc, and # participate in rooms hosted on this server which have been made # accessible to anonymous users. allow_guest_access: False # The list of identity servers trusted to verify third party # identifiers by this server. trusted_third_party_id_servers: - matrix.org - vector.im - riot.im # Users who register on this homeserver will automatically be joined # to these rooms #auto_join_rooms: # - "#example:example.com" """ % locals() def add_arguments(self, parser): reg_group = parser.add_argument_group("registration") reg_group.add_argument( "--enable-registration", action="store_true", default=None, help="Enable registration for new users." ) def read_arguments(self, args): if args.enable_registration is not None: self.enable_registration = bool( strtobool(str(args.enable_registration)) ) synapse-0.24.0/synapse/config/repository.py000066400000000000000000000217271317335640100210130ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014, 2015 matrix.org # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from ._base import Config, ConfigError from collections import namedtuple MISSING_NETADDR = ( "Missing netaddr library. This is required for URL preview API." ) MISSING_LXML = ( """Missing lxml library. This is required for URL preview API. Install by running: pip install lxml Requires libxslt1-dev system package. """ ) ThumbnailRequirement = namedtuple( "ThumbnailRequirement", ["width", "height", "method", "media_type"] ) def parse_thumbnail_requirements(thumbnail_sizes): """ Takes a list of dictionaries with "width", "height", and "method" keys and creates a map from image media types to the thumbnail size, thumbnailing method, and thumbnail media type to precalculate Args: thumbnail_sizes(list): List of dicts with "width", "height", and "method" keys Returns: Dictionary mapping from media type string to list of ThumbnailRequirement tuples. 
""" requirements = {} for size in thumbnail_sizes: width = size["width"] height = size["height"] method = size["method"] jpeg_thumbnail = ThumbnailRequirement(width, height, method, "image/jpeg") png_thumbnail = ThumbnailRequirement(width, height, method, "image/png") requirements.setdefault("image/jpeg", []).append(jpeg_thumbnail) requirements.setdefault("image/gif", []).append(png_thumbnail) requirements.setdefault("image/png", []).append(png_thumbnail) return { media_type: tuple(thumbnails) for media_type, thumbnails in requirements.items() } class ContentRepositoryConfig(Config): def read_config(self, config): self.max_upload_size = self.parse_size(config["max_upload_size"]) self.max_image_pixels = self.parse_size(config["max_image_pixels"]) self.max_spider_size = self.parse_size(config["max_spider_size"]) self.media_store_path = self.ensure_directory(config["media_store_path"]) self.backup_media_store_path = config.get("backup_media_store_path") if self.backup_media_store_path: self.backup_media_store_path = self.ensure_directory( self.backup_media_store_path ) self.synchronous_backup_media_store = config.get( "synchronous_backup_media_store", False ) self.uploads_path = self.ensure_directory(config["uploads_path"]) self.dynamic_thumbnails = config["dynamic_thumbnails"] self.thumbnail_requirements = parse_thumbnail_requirements( config["thumbnail_sizes"] ) self.url_preview_enabled = config.get("url_preview_enabled", False) if self.url_preview_enabled: try: import lxml lxml # To stop unused lint. except ImportError: raise ConfigError(MISSING_LXML) try: from netaddr import IPSet except ImportError: raise ConfigError(MISSING_NETADDR) if "url_preview_ip_range_blacklist" in config: self.url_preview_ip_range_blacklist = IPSet( config["url_preview_ip_range_blacklist"] ) else: raise ConfigError( "For security, you must specify an explicit target IP address " "blacklist in url_preview_ip_range_blacklist for url previewing " "to work" ) self.url_preview_ip_range_whitelist = IPSet( config.get("url_preview_ip_range_whitelist", ()) ) self.url_preview_url_blacklist = config.get( "url_preview_url_blacklist", () ) def default_config(self, **kwargs): media_store = self.default_path("media_store") uploads_path = self.default_path("uploads") return """ # Directory where uploaded images and attachments are stored. media_store_path: "%(media_store)s" # A secondary directory where uploaded images and attachments are # stored as a backup. # backup_media_store_path: "%(media_store)s" # Whether to wait for successful write to backup media store before # returning successfully. # synchronous_backup_media_store: false # Directory where in-progress uploads are stored. uploads_path: "%(uploads_path)s" # The largest allowed upload size in bytes max_upload_size: "10M" # Maximum number of pixels that will be thumbnailed max_image_pixels: "32M" # Whether to generate new thumbnails on the fly to precisely match # the resolution requested by the client. If true then whenever # a new resolution is requested by the client the server will # generate a new thumbnail. If false the server will pick a thumbnail # from a precalculated list. dynamic_thumbnails: false # List of thumbnail to precalculate when an image is uploaded. thumbnail_sizes: - width: 32 height: 32 method: crop - width: 96 height: 96 method: crop - width: 320 height: 240 method: scale - width: 640 height: 480 method: scale - width: 800 height: 600 method: scale # Is the preview URL API enabled? 
If enabled, you *must* specify # an explicit url_preview_ip_range_blacklist of IPs that the spider is # denied from accessing. url_preview_enabled: False # List of IP address CIDR ranges that the URL preview spider is denied # from accessing. There are no defaults: you must explicitly # specify a list for URL previewing to work. You should specify any # internal services in your network that you do not want synapse to try # to connect to, otherwise anyone in any Matrix room could cause your # synapse to issue arbitrary GET requests to your internal services, # causing serious security issues. # # url_preview_ip_range_blacklist: # - '127.0.0.0/8' # - '10.0.0.0/8' # - '172.16.0.0/12' # - '192.168.0.0/16' # - '100.64.0.0/10' # - '169.254.0.0/16' # # List of IP address CIDR ranges that the URL preview spider is allowed # to access even if they are specified in url_preview_ip_range_blacklist. # This is useful for specifying exceptions to wide-ranging blacklisted # target IP ranges - e.g. for enabling URL previews for a specific private # website only visible in your network. # # url_preview_ip_range_whitelist: # - '192.168.1.1' # Optional list of URL matches that the URL preview spider is # denied from accessing. You should use url_preview_ip_range_blacklist # in preference to this, otherwise someone could define a public DNS # entry that points to a private IP address and circumvent the blacklist. # This is more useful if you know there is an entire shape of URL that # you know that will never want synapse to try to spider. # # Each list entry is a dictionary of url component attributes as returned # by urlparse.urlsplit as applied to the absolute form of the URL. See # https://docs.python.org/2/library/urlparse.html#urlparse.urlsplit # The values of the dictionary are treated as an filename match pattern # applied to that component of URLs, unless they start with a ^ in which # case they are treated as a regular expression match. If all the # specified component matches for a given list item succeed, the URL is # blacklisted. # # url_preview_url_blacklist: # # blacklist any URL with a username in its URI # - username: '*' # # # blacklist all *.google.com URLs # - netloc: 'google.com' # - netloc: '*.google.com' # # # blacklist all plain HTTP URLs # - scheme: 'http' # # # blacklist http(s)://www.acme.com/foo # - netloc: 'www.acme.com' # path: '/foo' # # # blacklist any URL with a literal IPv4 address # - netloc: '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$' # The largest allowed URL preview spidering size in bytes max_spider_size: "10M" """ % locals() synapse-0.24.0/synapse/config/saml2.py000066400000000000000000000042461317335640100176070ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015 Ericsson # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
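# A minimal sketch of enabling SAML2 via homeserver.yaml. The key names come
# from read_config below; the path and URL are illustrative. Note that simply
# defining the saml2_config section enables it, because "enabled" defaults to
# true whenever the section is present:
#
#     saml2_config:
#       config_path: "/path/to/sp_conf.py"
#       idp_redirect_url: "https://idp.example.com/idp"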
from ._base import Config class SAML2Config(Config): """SAML2 Configuration Synapse uses pysaml2 libraries for providing SAML2 support config_path: Path to the sp_conf.py configuration file idp_redirect_url: Identity provider URL which will redirect the user back to /login/saml2 with proper info. sp_conf.py file is something like: https://github.com/rohe/pysaml2/blob/master/example/sp-repoze/sp_conf.py.example More information: https://pythonhosted.org/pysaml2/howto/config.html """ def read_config(self, config): saml2_config = config.get("saml2_config", None) if saml2_config: self.saml2_enabled = saml2_config.get("enabled", True) self.saml2_config_path = saml2_config["config_path"] self.saml2_idp_redirect_url = saml2_config["idp_redirect_url"] else: self.saml2_enabled = False self.saml2_config_path = None self.saml2_idp_redirect_url = None def default_config(self, config_dir_path, server_name, **kwargs): return """ # Enable SAML2 for registration and login. Uses pysaml2 # config_path: Path to the sp_conf.py configuration file # idp_redirect_url: Identity provider URL which will redirect # the user back to /login/saml2 with proper info. # See pysaml2 docs for format of config. #saml2_config: # enabled: true # config_path: "%s/sp_conf.py" # idp_redirect_url: "http://%s/idp" """ % (config_dir_path, server_name) synapse-0.24.0/synapse/config/server.py000066400000000000000000000274271317335640100201050ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2017 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from ._base import Config, ConfigError class ServerConfig(Config): def read_config(self, config): self.server_name = config["server_name"] self.pid_file = self.abspath(config.get("pid_file")) self.web_client = config["web_client"] self.web_client_location = config.get("web_client_location", None) self.soft_file_limit = config["soft_file_limit"] self.daemonize = config.get("daemonize") self.print_pidfile = config.get("print_pidfile") self.user_agent_suffix = config.get("user_agent_suffix") self.use_frozen_dicts = config.get("use_frozen_dicts", False) self.public_baseurl = config.get("public_baseurl") self.cpu_affinity = config.get("cpu_affinity") # Whether to send federation traffic out in this process. This only # applies to some federation traffic, and so shouldn't be used to # "disable" federation self.send_federation = config.get("send_federation", True) # Whether to update the user directory or not. 
This should be set to # false only if we are updating the user directory in a worker self.update_user_directory = config.get("update_user_directory", True) self.filter_timeline_limit = config.get("filter_timeline_limit", -1) # Whether we should block invites sent to users on this server # (other than those sent by local server admins) self.block_non_admin_invites = config.get( "block_non_admin_invites", False, ) if self.public_baseurl is not None: if self.public_baseurl[-1] != '/': self.public_baseurl += '/' self.start_pushers = config.get("start_pushers", True) self.listeners = config.get("listeners", []) for listener in self.listeners: bind_address = listener.pop("bind_address", None) bind_addresses = listener.setdefault("bind_addresses", []) if bind_address: bind_addresses.append(bind_address) elif not bind_addresses: bind_addresses.append('') self.gc_thresholds = read_gc_thresholds(config.get("gc_thresholds", None)) bind_port = config.get("bind_port") if bind_port: self.listeners = [] bind_host = config.get("bind_host", "") gzip_responses = config.get("gzip_responses", True) names = ["client", "webclient"] if self.web_client else ["client"] self.listeners.append({ "port": bind_port, "bind_addresses": [bind_host], "tls": True, "type": "http", "resources": [ { "names": names, "compress": gzip_responses, }, { "names": ["federation"], "compress": False, } ] }) unsecure_port = config.get("unsecure_port", bind_port - 400) if unsecure_port: self.listeners.append({ "port": unsecure_port, "bind_addresses": [bind_host], "tls": False, "type": "http", "resources": [ { "names": names, "compress": gzip_responses, }, { "names": ["federation"], "compress": False, } ] }) manhole = config.get("manhole") if manhole: self.listeners.append({ "port": manhole, "bind_addresses": ["127.0.0.1"], "type": "manhole", }) metrics_port = config.get("metrics_port") if metrics_port: self.listeners.append({ "port": metrics_port, "bind_addresses": [config.get("metrics_bind_host", "127.0.0.1")], "tls": False, "type": "http", "resources": [ { "names": ["metrics"], "compress": False, }, ] }) def default_config(self, server_name, **kwargs): if ":" in server_name: bind_port = int(server_name.split(":")[1]) unsecure_port = bind_port - 400 else: bind_port = 8448 unsecure_port = 8008 pid_file = self.abspath("homeserver.pid") return """\ ## Server ## # The domain name of the server, with optional explicit port. # This is used by remote servers to connect to this server, # e.g. matrix.org, localhost:8080, etc. # This is also the last part of your UserID. server_name: "%(server_name)s" # When running as a daemon, the file to store the pid in pid_file: %(pid_file)s # CPU affinity mask. Setting this restricts the CPUs on which the # process will be scheduled. It is represented as a bitmask, with the # lowest order bit corresponding to the first logical CPU and the # highest order bit corresponding to the last logical CPU. Not all CPUs # may exist on a given system but a mask may specify more CPUs than are # present. # # For example: # 0x00000001 is processor #0, # 0x00000003 is processors #0 and #1, # 0xFFFFFFFF is all processors (#0 through #31). # # Pinning a Python process to a single CPU is desirable, because Python # is inherently single-threaded due to the GIL, and can suffer a # 30-40%% slowdown due to cache blow-out and thread context switching # if the scheduler happens to schedule the underlying threads across # different cores. See # https://www.mirantis.com/blog/improve-performance-python-programs-restricting-single-cpu/. 
# # cpu_affinity: 0xFFFFFFFF # Whether to serve a web client from the HTTP/HTTPS root resource. web_client: True # The root directory to server for the above web client. # If left undefined, synapse will serve the matrix-angular-sdk web client. # Make sure matrix-angular-sdk is installed with pip if web_client is True # and web_client_location is undefined # web_client_location: "/path/to/web/root" # The public-facing base URL for the client API (not including _matrix/...) # public_baseurl: https://example.com:8448/ # Set the soft limit on the number of file descriptors synapse can use # Zero is used to indicate synapse should set the soft limit to the # hard limit. soft_file_limit: 0 # The GC threshold parameters to pass to `gc.set_threshold`, if defined # gc_thresholds: [700, 10, 10] # Set the limit on the returned events in the timeline in the get # and sync operations. The default value is -1, means no upper limit. # filter_timeline_limit: 5000 # Whether room invites to users on this server should be blocked # (except those sent by local server admins). The default is False. # block_non_admin_invites: True # List of ports that Synapse should listen on, their purpose and their # configuration. listeners: # Main HTTPS listener # For when matrix traffic is sent directly to synapse. - # The port to listen for HTTPS requests on. port: %(bind_port)s # Local addresses to listen on. # This will listen on all IPv4 addresses by default. bind_addresses: - '0.0.0.0' # Uncomment to listen on all IPv6 interfaces # N.B: On at least Linux this will also listen on all IPv4 # addresses, so you will need to comment out the line above. # - '::' # This is a 'http' listener, allows us to specify 'resources'. type: http tls: true # Use the X-Forwarded-For (XFF) header as the client IP and not the # actual client IP. x_forwarded: false # List of HTTP resources to serve on this listener. resources: - # List of resources to host on this listener. names: - client # The client-server APIs, both v1 and v2 - webclient # The bundled webclient. # Should synapse compress HTTP responses to clients that support it? # This should be disabled if running synapse behind a load balancer # that can do automatic compression. compress: true - names: [federation] # Federation APIs compress: false # Unsecure HTTP listener, # For when matrix traffic passes through loadbalancer that unwraps TLS. - port: %(unsecure_port)s tls: false bind_addresses: ['0.0.0.0'] type: http x_forwarded: false resources: - names: [client, webclient] compress: true - names: [federation] compress: false # Turn on the twisted ssh manhole service on localhost on the given # port. 
# - port: 9000 # bind_address: 127.0.0.1 # type: manhole """ % locals() def read_arguments(self, args): if args.manhole is not None: self.manhole = args.manhole if args.daemonize is not None: self.daemonize = args.daemonize if args.print_pidfile is not None: self.print_pidfile = args.print_pidfile def add_arguments(self, parser): server_group = parser.add_argument_group("server") server_group.add_argument("-D", "--daemonize", action='store_true', default=None, help="Daemonize the home server") server_group.add_argument("--print-pidfile", action='store_true', default=None, help="Print the path to the pidfile just" " before daemonizing") server_group.add_argument("--manhole", metavar="PORT", dest="manhole", type=int, help="Turn on the twisted telnet manhole" " service on the given port.") def read_gc_thresholds(thresholds): """Reads the three integer thresholds for garbage collection. Ensures that the thresholds are integers if thresholds are supplied. """ if thresholds is None: return None try: assert len(thresholds) == 3 return ( int(thresholds[0]), int(thresholds[1]), int(thresholds[2]), ) except: raise ConfigError( "Value of `gc_threshold` must be a list of three integers if set" ) synapse-0.24.0/synapse/config/spam_checker.py000066400000000000000000000021711317335640100212100ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2017 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from synapse.util.module_loader import load_module from ._base import Config class SpamCheckerConfig(Config): def read_config(self, config): self.spam_checker = None provider = config.get("spam_checker", None) if provider is not None: self.spam_checker = load_module(provider) def default_config(self, **kwargs): return """\ # spam_checker: # module: "my_custom_project.SuperSpamChecker" # config: # example_option: 'things' """ synapse-0.24.0/synapse/config/tls.py000066400000000000000000000175611317335640100173770ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
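# Illustrative sketch (not part of the original source tree): the rough shape
# of a custom spam-checker module that the spam_checker option above could
# point at. The hook names and the way the class is constructed are
# assumptions drawn from the config comment and the changelog; consult
# synapse.events.spamcheck for the authoritative interface.
class SuperSpamChecker(object):
    def __init__(self, config):
        # Synapse is assumed to instantiate the class with the parsed
        # "config" block from the spam_checker option.
        self._blocked_word = config.get("example_option")

    @staticmethod
    def parse_config(config):
        # Assumed hook: load_module() validates the raw "config" block
        # before the class is instantiated.
        return config or {}

    def check_event_for_spam(self, event):
        # Returning a string (or True) rejects the event; the string may be
        # surfaced to the client as the error message.
        body = event.content.get("body", "")
        if self._blocked_word and self._blocked_word in body:
            return "Message rejected by example spam checker"
        return False

    def user_may_invite(self, inviter_userid, invitee_userid, room_id):
        # Returning False rejects the invite.
        return True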
from ._base import Config from OpenSSL import crypto import subprocess import os from hashlib import sha256 from unpaddedbase64 import encode_base64 GENERATE_DH_PARAMS = False class TlsConfig(Config): def read_config(self, config): self.tls_certificate = self.read_tls_certificate( config.get("tls_certificate_path") ) self.tls_certificate_file = config.get("tls_certificate_path") self.no_tls = config.get("no_tls", False) if self.no_tls: self.tls_private_key = None else: self.tls_private_key = self.read_tls_private_key( config.get("tls_private_key_path") ) self.tls_dh_params_path = self.check_file( config.get("tls_dh_params_path"), "tls_dh_params" ) self.tls_fingerprints = config["tls_fingerprints"] # Check that our own certificate is included in the list of fingerprints # and include it if it is not. x509_certificate_bytes = crypto.dump_certificate( crypto.FILETYPE_ASN1, self.tls_certificate ) sha256_fingerprint = encode_base64(sha256(x509_certificate_bytes).digest()) sha256_fingerprints = set(f["sha256"] for f in self.tls_fingerprints) if sha256_fingerprint not in sha256_fingerprints: self.tls_fingerprints.append({u"sha256": sha256_fingerprint}) # This config option applies to non-federation HTTP clients # (e.g. for talking to recaptcha, identity servers, and such) # It should never be used in production, and is intended for # use only when running tests. self.use_insecure_ssl_client_just_for_testing_do_not_use = config.get( "use_insecure_ssl_client_just_for_testing_do_not_use" ) def default_config(self, config_dir_path, server_name, **kwargs): base_key_name = os.path.join(config_dir_path, server_name) tls_certificate_path = base_key_name + ".tls.crt" tls_private_key_path = base_key_name + ".tls.key" tls_dh_params_path = base_key_name + ".tls.dh" return """\ # PEM encoded X509 certificate for TLS. # You can replace the self-signed certificate that synapse # autogenerates on launch with your own SSL certificate + key pair # if you like. Any required intermediary certificates can be # appended after the primary certificate in hierarchical order. tls_certificate_path: "%(tls_certificate_path)s" # PEM encoded private key for TLS tls_private_key_path: "%(tls_private_key_path)s" # PEM dh parameters for ephemeral keys tls_dh_params_path: "%(tls_dh_params_path)s" # Don't bind to the https port no_tls: False # List of allowed TLS fingerprints for this server to publish along # with the signing keys for this server. Other matrix servers that # make HTTPS requests to this server will check that the TLS # certificates returned by this server match one of the fingerprints. # # Synapse automatically adds the fingerprint of its own certificate # to the list. So if federation traffic is handle directly by synapse # then no modification to the list is required. # # If synapse is run behind a load balancer that handles the TLS then it # will be necessary to add the fingerprints of the certificates used by # the loadbalancers to this list if they are different to the one # synapse is using. # # Homeservers are permitted to cache the list of TLS fingerprints # returned in the key responses up to the "valid_until_ts" returned in # key. It may be necessary to publish the fingerprints of a new # certificate and wait until the "valid_until_ts" of the previous key # responses have passed before deploying it. 
tls_fingerprints: [] # tls_fingerprints: [{"sha256": ""}] """ % locals() def read_tls_certificate(self, cert_path): cert_pem = self.read_file(cert_path, "tls_certificate") return crypto.load_certificate(crypto.FILETYPE_PEM, cert_pem) def read_tls_private_key(self, private_key_path): private_key_pem = self.read_file(private_key_path, "tls_private_key") return crypto.load_privatekey(crypto.FILETYPE_PEM, private_key_pem) def generate_files(self, config): tls_certificate_path = config["tls_certificate_path"] tls_private_key_path = config["tls_private_key_path"] tls_dh_params_path = config["tls_dh_params_path"] if not self.path_exists(tls_private_key_path): with open(tls_private_key_path, "w") as private_key_file: tls_private_key = crypto.PKey() tls_private_key.generate_key(crypto.TYPE_RSA, 2048) private_key_pem = crypto.dump_privatekey( crypto.FILETYPE_PEM, tls_private_key ) private_key_file.write(private_key_pem) else: with open(tls_private_key_path) as private_key_file: private_key_pem = private_key_file.read() tls_private_key = crypto.load_privatekey( crypto.FILETYPE_PEM, private_key_pem ) if not self.path_exists(tls_certificate_path): with open(tls_certificate_path, "w") as certificate_file: cert = crypto.X509() subject = cert.get_subject() subject.CN = config["server_name"] cert.set_serial_number(1000) cert.gmtime_adj_notBefore(0) cert.gmtime_adj_notAfter(10 * 365 * 24 * 60 * 60) cert.set_issuer(cert.get_subject()) cert.set_pubkey(tls_private_key) cert.sign(tls_private_key, 'sha256') cert_pem = crypto.dump_certificate(crypto.FILETYPE_PEM, cert) certificate_file.write(cert_pem) if not self.path_exists(tls_dh_params_path): if GENERATE_DH_PARAMS: subprocess.check_call([ "openssl", "dhparam", "-outform", "PEM", "-out", tls_dh_params_path, "2048" ]) else: with open(tls_dh_params_path, "w") as dh_params_file: dh_params_file.write( "2048-bit DH parameters taken from rfc3526\n" "-----BEGIN DH PARAMETERS-----\n" "MIIBCAKCAQEA///////////JD9qiIWjC" "NMTGYouA3BzRKQJOCIpnzHQCC76mOxOb\n" "IlFKCHmONATd75UZs806QxswKwpt8l8U" "N0/hNW1tUcJF5IW1dmJefsb0TELppjft\n" "awv/XLb0Brft7jhr+1qJn6WunyQRfEsf" "5kkoZlHs5Fs9wgB8uKFjvwWY2kg2HFXT\n" "mmkWP6j9JM9fg2VdI9yjrZYcYvNWIIVS" "u57VKQdwlpZtZww1Tkq8mATxdGwIyhgh\n" "fDKQXkYuNs474553LBgOhgObJ4Oi7Aei" "j7XFXfBvTFLJ3ivL9pVYFxg5lUl86pVq\n" "5RXSJhiY+gUQFXKOWoqsqmj/////////" "/wIBAg==\n" "-----END DH PARAMETERS-----\n" ) synapse-0.24.0/synapse/config/voip.py000066400000000000000000000037651317335640100175530ustar00rootroot00000000000000# Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
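# Illustrative sketch (not part of the original source tree): computing the
# unpadded-base64 SHA-256 fingerprint that TlsConfig above publishes for its
# own certificate, but for some other PEM certificate on disk (for example
# the one used by a TLS-terminating load balancer). The path argument is
# hypothetical.
import hashlib

from OpenSSL import crypto
from unpaddedbase64 import encode_base64


def tls_fingerprint_of(cert_path):
    with open(cert_path) as f:
        cert = crypto.load_certificate(crypto.FILETYPE_PEM, f.read())
    der_bytes = crypto.dump_certificate(crypto.FILETYPE_ASN1, cert)
    return {u"sha256": encode_base64(hashlib.sha256(der_bytes).digest())}

# The resulting dict can be appended to the tls_fingerprints list shown above.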
from ._base import Config class VoipConfig(Config): def read_config(self, config): self.turn_uris = config.get("turn_uris", []) self.turn_shared_secret = config.get("turn_shared_secret") self.turn_username = config.get("turn_username") self.turn_password = config.get("turn_password") self.turn_user_lifetime = self.parse_duration(config["turn_user_lifetime"]) self.turn_allow_guests = config.get("turn_allow_guests", True) def default_config(self, **kwargs): return """\ ## Turn ## # The public URIs of the TURN server to give to clients turn_uris: [] # The shared secret used to compute passwords for the TURN server turn_shared_secret: "YOUR_SHARED_SECRET" # The Username and password if the TURN server needs them and # does not use a token #turn_username: "TURNSERVER_USERNAME" #turn_password: "TURNSERVER_PASSWORD" # How long generated TURN credentials last turn_user_lifetime: "1h" # Whether guests should be allowed to use the TURN server. # This defaults to True, otherwise VoIP will be unreliable for guests. # However, it does introduce a slight security risk as it allows users to # connect to arbitrary endpoints without having first signed up for a # valid account (e.g. by passing a CAPTCHA). turn_allow_guests: True """ synapse-0.24.0/synapse/config/workers.py000066400000000000000000000040051317335640100202560ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2016 matrix.org # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from ._base import Config class WorkerConfig(Config): """The workers are processes run separately to the main synapse process. They have their own pid_file and listener configuration. 
They use the replication_url to talk to the main synapse process.""" def read_config(self, config): self.worker_app = config.get("worker_app") self.worker_listeners = config.get("worker_listeners") self.worker_daemonize = config.get("worker_daemonize") self.worker_pid_file = config.get("worker_pid_file") self.worker_log_file = config.get("worker_log_file") self.worker_log_config = config.get("worker_log_config") self.worker_replication_host = config.get("worker_replication_host", None) self.worker_replication_port = config.get("worker_replication_port", None) self.worker_name = config.get("worker_name", self.worker_app) self.worker_main_http_uri = config.get("worker_main_http_uri", None) self.worker_cpu_affinity = config.get("worker_cpu_affinity") if self.worker_listeners: for listener in self.worker_listeners: bind_address = listener.pop("bind_address", None) bind_addresses = listener.setdefault("bind_addresses", []) if bind_address: bind_addresses.append(bind_address) elif not bind_addresses: bind_addresses.append('') synapse-0.24.0/synapse/crypto/000077500000000000000000000000001317335640100162645ustar00rootroot00000000000000synapse-0.24.0/synapse/crypto/__init__.py000066400000000000000000000011371317335640100203770ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. synapse-0.24.0/synapse/crypto/context_factory.py000066400000000000000000000033571317335640100220610ustar00rootroot00000000000000# Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
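# Illustrative sketch (not part of the original source tree): the subset of a
# worker configuration file consumed by WorkerConfig.read_config above,
# expressed as the dict it would be parsed into. The application name, port
# and paths are hypothetical.
EXAMPLE_WORKER_CONFIG = {
    "worker_app": "synapse.app.synchrotron",
    "worker_name": "synchrotron1",
    "worker_replication_host": "127.0.0.1",
    "worker_replication_port": 9092,
    "worker_pid_file": "/var/run/synapse-synchrotron.pid",
    "worker_listeners": [
        {
            "type": "http",
            "port": 8083,
            # Legacy singular form; read_config folds this into
            # "bind_addresses" before the listener is used.
            "bind_address": "0.0.0.0",
            "resources": [{"names": ["client"]}],
        },
    ],
}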
from twisted.internet import ssl from OpenSSL import SSL from twisted.internet._sslverify import _OpenSSLECCurve, _defaultCurveName import logging logger = logging.getLogger(__name__) class ServerContextFactory(ssl.ContextFactory): """Factory for PyOpenSSL SSL contexts that are used to handle incoming connections and to make connections to remote servers.""" def __init__(self, config): self._context = SSL.Context(SSL.SSLv23_METHOD) self.configure_context(self._context, config) @staticmethod def configure_context(context, config): try: _ecCurve = _OpenSSLECCurve(_defaultCurveName) _ecCurve.addECKeyToContext(context) except: logger.exception("Failed to enable elliptic curve for TLS") context.set_options(SSL.OP_NO_SSLv2 | SSL.OP_NO_SSLv3) context.use_certificate_chain_file(config.tls_certificate_file) if not config.no_tls: context.use_privatekey(config.tls_private_key) context.load_tmp_dh(config.tls_dh_params_path) context.set_cipher_list("!ADH:HIGH+kEDH:!AECDH:HIGH+kEECDH") def getContext(self): return self._context synapse-0.24.0/synapse/crypto/event_signing.py000066400000000000000000000074411317335640100215030ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from synapse.api.errors import SynapseError, Codes from synapse.events.utils import prune_event from canonicaljson import encode_canonical_json from unpaddedbase64 import encode_base64, decode_base64 from signedjson.sign import sign_json import hashlib import logging logger = logging.getLogger(__name__) def check_event_content_hash(event, hash_algorithm=hashlib.sha256): """Check whether the hash for this PDU matches the contents""" name, expected_hash = compute_content_hash(event, hash_algorithm) logger.debug("Expecting hash: %s", encode_base64(expected_hash)) if name not in event.hashes: raise SynapseError( 400, "Algorithm %s not in hashes %s" % ( name, list(event.hashes), ), Codes.UNAUTHORIZED, ) message_hash_base64 = event.hashes[name] try: message_hash_bytes = decode_base64(message_hash_base64) except: raise SynapseError( 400, "Invalid base64: %s" % (message_hash_base64,), Codes.UNAUTHORIZED, ) return message_hash_bytes == expected_hash def compute_content_hash(event, hash_algorithm): event_json = event.get_pdu_json() event_json.pop("age_ts", None) event_json.pop("unsigned", None) event_json.pop("signatures", None) event_json.pop("hashes", None) event_json.pop("outlier", None) event_json.pop("destinations", None) event_json_bytes = encode_canonical_json(event_json) hashed = hash_algorithm(event_json_bytes) return (hashed.name, hashed.digest()) def compute_event_reference_hash(event, hash_algorithm=hashlib.sha256): tmp_event = prune_event(event) event_json = tmp_event.get_pdu_json() event_json.pop("signatures", None) event_json.pop("age_ts", None) event_json.pop("unsigned", None) event_json_bytes = encode_canonical_json(event_json) hashed = hash_algorithm(event_json_bytes) return (hashed.name, hashed.digest()) def compute_event_signature(event, signature_name, 
signing_key): tmp_event = prune_event(event) redact_json = tmp_event.get_pdu_json() redact_json.pop("age_ts", None) redact_json.pop("unsigned", None) logger.debug("Signing event: %s", encode_canonical_json(redact_json)) redact_json = sign_json(redact_json, signature_name, signing_key) logger.debug("Signed event: %s", encode_canonical_json(redact_json)) return redact_json["signatures"] def add_hashes_and_signatures(event, signature_name, signing_key, hash_algorithm=hashlib.sha256): # if hasattr(event, "old_state_events"): # state_json_bytes = encode_canonical_json( # [e.event_id for e in event.old_state_events.values()] # ) # hashed = hash_algorithm(state_json_bytes) # event.state_hash = { # hashed.name: encode_base64(hashed.digest()) # } name, digest = compute_content_hash(event, hash_algorithm=hash_algorithm) if not hasattr(event, "hashes"): event.hashes = {} event.hashes[name] = encode_base64(digest) event.signatures = compute_event_signature( event, signature_name=signature_name, signing_key=signing_key, ) synapse-0.24.0/synapse/crypto/keyclient.py000066400000000000000000000106341317335640100206310ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from synapse.util import logcontext from twisted.web.http import HTTPClient from twisted.internet.protocol import Factory from twisted.internet import defer, reactor from synapse.http.endpoint import matrix_federation_endpoint import simplejson as json import logging logger = logging.getLogger(__name__) KEY_API_V1 = b"/_matrix/key/v1/" @defer.inlineCallbacks def fetch_server_key(server_name, ssl_context_factory, path=KEY_API_V1): """Fetch the keys for a remote server.""" factory = SynapseKeyClientFactory() factory.path = path factory.host = server_name endpoint = matrix_federation_endpoint( reactor, server_name, ssl_context_factory, timeout=30 ) for i in range(5): try: with logcontext.PreserveLoggingContext(): protocol = yield endpoint.connect(factory) server_response, server_certificate = yield protocol.remote_key defer.returnValue((server_response, server_certificate)) except SynapseKeyClientError as e: logger.exception("Error getting key for %r" % (server_name,)) if e.status.startswith("4"): # Don't retry for 4xx responses. 
raise IOError("Cannot get key for %r" % server_name) except Exception as e: logger.exception(e) raise IOError("Cannot get key for %r" % server_name) class SynapseKeyClientError(Exception): """The key wasn't retrieved from the remote server.""" status = None pass class SynapseKeyClientProtocol(HTTPClient): """Low level HTTPS client which retrieves an application/json response from the server and extracts the X.509 certificate for the remote peer from the SSL connection.""" timeout = 30 def __init__(self): self.remote_key = defer.Deferred() self.host = None self._peer = None def connectionMade(self): self._peer = self.transport.getPeer() logger.debug("Connected to %s", self._peer) self.sendCommand(b"GET", self.path) if self.host: self.sendHeader(b"Host", self.host) self.endHeaders() self.timer = reactor.callLater( self.timeout, self.on_timeout ) def errback(self, error): if not self.remote_key.called: self.remote_key.errback(error) def callback(self, result): if not self.remote_key.called: self.remote_key.callback(result) def handleStatus(self, version, status, message): if status != b"200": # logger.info("Non-200 response from %s: %s %s", # self.transport.getHost(), status, message) error = SynapseKeyClientError( "Non-200 response %r from %r" % (status, self.host) ) error.status = status self.errback(error) self.transport.abortConnection() def handleResponse(self, response_body_bytes): try: json_response = json.loads(response_body_bytes) except ValueError: # logger.info("Invalid JSON response from %s", # self.transport.getHost()) self.transport.abortConnection() return certificate = self.transport.getPeerCertificate() self.callback((json_response, certificate)) self.transport.abortConnection() self.timer.cancel() def on_timeout(self): logger.debug( "Timeout waiting for response from %s: %s", self.host, self._peer, ) self.errback(IOError("Timeout waiting for response")) self.transport.abortConnection() class SynapseKeyClientFactory(Factory): def protocol(self): protocol = SynapseKeyClientProtocol() protocol.path = self.path protocol.host = self.host return protocol synapse-0.24.0/synapse/crypto/keyring.py000066400000000000000000000676371317335640100203310ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # Copyright 2017 New Vector Ltd. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
from synapse.crypto.keyclient import fetch_server_key from synapse.api.errors import SynapseError, Codes from synapse.util import unwrapFirstError, logcontext from synapse.util.logcontext import ( PreserveLoggingContext, preserve_fn ) from synapse.util.metrics import Measure from twisted.internet import defer from signedjson.sign import ( verify_signed_json, signature_ids, sign_json, encode_canonical_json ) from signedjson.key import ( is_signing_algorithm_supported, decode_verify_key_bytes ) from unpaddedbase64 import decode_base64, encode_base64 from OpenSSL import crypto from collections import namedtuple import urllib import hashlib import logging logger = logging.getLogger(__name__) VerifyKeyRequest = namedtuple("VerifyRequest", ( "server_name", "key_ids", "json_object", "deferred" )) """ A request for a verify key to verify a JSON object. Attributes: server_name(str): The name of the server to verify against. key_ids(set(str)): The set of key_ids to that could be used to verify the JSON object json_object(dict): The JSON object to verify. deferred(twisted.internet.defer.Deferred): A deferred (server_name, key_id, verify_key) tuple that resolves when a verify key has been fetched. The deferreds' callbacks are run with no logcontext. """ class KeyLookupError(ValueError): pass class Keyring(object): def __init__(self, hs): self.store = hs.get_datastore() self.clock = hs.get_clock() self.client = hs.get_http_client() self.config = hs.get_config() self.perspective_servers = self.config.perspectives self.hs = hs # map from server name to Deferred. Has an entry for each server with # an ongoing key download; the Deferred completes once the download # completes. # # These are regular, logcontext-agnostic Deferreds. self.key_downloads = {} def verify_json_for_server(self, server_name, json_object): return logcontext.make_deferred_yieldable( self.verify_json_objects_for_server( [(server_name, json_object)] )[0] ) def verify_json_objects_for_server(self, server_and_json): """Bulk verifies signatures of json objects, bulk fetching keys as necessary. Args: server_and_json (list): List of pairs of (server_name, json_object) Returns: List: for each input pair, a deferred indicating success or failure to verify each json object's signature for the given server_name. The deferreds run their callbacks in the sentinel logcontext. """ verify_requests = [] for server_name, json_object in server_and_json: key_ids = signature_ids(json_object, server_name) if not key_ids: logger.warn("Request from %s: no supported signature keys", server_name) deferred = defer.fail(SynapseError( 400, "Not signed with a supported algorithm", Codes.UNAUTHORIZED, )) else: deferred = defer.Deferred() logger.debug("Verifying for %s with key_ids %s", server_name, key_ids) verify_request = VerifyKeyRequest( server_name, key_ids, json_object, deferred ) verify_requests.append(verify_request) preserve_fn(self._start_key_lookups)(verify_requests) # Pass those keys to handle_key_deferred so that the json object # signatures can be verified handle = preserve_fn(_handle_key_deferred) return [ handle(rq) for rq in verify_requests ] @defer.inlineCallbacks def _start_key_lookups(self, verify_requests): """Sets off the key fetches for each verify request Once each fetch completes, verify_request.deferred will be resolved. Args: verify_requests (List[VerifyKeyRequest]): """ # create a deferred for each server we're going to look up the keys # for; we'll resolve them once we have completed our lookups. 
# These will be passed into wait_for_previous_lookups to block # any other lookups until we have finished. # The deferreds are called with no logcontext. server_to_deferred = { rq.server_name: defer.Deferred() for rq in verify_requests } # We want to wait for any previous lookups to complete before # proceeding. yield self.wait_for_previous_lookups( [rq.server_name for rq in verify_requests], server_to_deferred, ) # Actually start fetching keys. self._get_server_verify_keys(verify_requests) # When we've finished fetching all the keys for a given server_name, # resolve the deferred passed to `wait_for_previous_lookups` so that # any lookups waiting will proceed. # # map from server name to a set of request ids server_to_request_ids = {} for verify_request in verify_requests: server_name = verify_request.server_name request_id = id(verify_request) server_to_request_ids.setdefault(server_name, set()).add(request_id) def remove_deferreds(res, verify_request): server_name = verify_request.server_name request_id = id(verify_request) server_to_request_ids[server_name].discard(request_id) if not server_to_request_ids[server_name]: d = server_to_deferred.pop(server_name, None) if d: d.callback(None) return res for verify_request in verify_requests: verify_request.deferred.addBoth( remove_deferreds, verify_request, ) @defer.inlineCallbacks def wait_for_previous_lookups(self, server_names, server_to_deferred): """Waits for any previous key lookups for the given servers to finish. Args: server_names (list): list of server_names we want to lookup server_to_deferred (dict): server_name to deferred which gets resolved once we've finished looking up keys for that server. The Deferreds should be regular twisted ones which call their callbacks with no logcontext. Returns: a Deferred which resolves once all key lookups for the given servers have completed. Follows the synapse rules of logcontext preservation. """ while True: wait_on = [ self.key_downloads[server_name] for server_name in server_names if server_name in self.key_downloads ] if wait_on: with PreserveLoggingContext(): yield defer.DeferredList(wait_on) else: break def rm(r, server_name_): self.key_downloads.pop(server_name_, None) return r for server_name, deferred in server_to_deferred.items(): self.key_downloads[server_name] = deferred deferred.addBoth(rm, server_name) def _get_server_verify_keys(self, verify_requests): """Tries to find at least one key for each verify request For each verify_request, verify_request.deferred is called back with params (server_name, key_id, VerifyKey) if a key is found, or errbacked with a SynapseError if none of the keys are found. Args: verify_requests (list[VerifyKeyRequest]): list of verify requests """ # These are functions that produce keys given a list of key ids key_fetch_fns = ( self.get_keys_from_store, # First try the local store self.get_keys_from_perspectives, # Then try via perspectives self.get_keys_from_server, # Then try directly ) @defer.inlineCallbacks def do_iterations(): with Measure(self.clock, "get_server_verify_keys"): # dict[str, dict[str, VerifyKey]]: results so far. 
# map server_name -> key_id -> VerifyKey merged_results = {} # dict[str, set(str)]: keys to fetch for each server missing_keys = {} for verify_request in verify_requests: missing_keys.setdefault(verify_request.server_name, set()).update( verify_request.key_ids ) for fn in key_fetch_fns: results = yield fn(missing_keys.items()) merged_results.update(results) # We now need to figure out which verify requests we have keys # for and which we don't missing_keys = {} requests_missing_keys = [] for verify_request in verify_requests: server_name = verify_request.server_name result_keys = merged_results[server_name] if verify_request.deferred.called: # We've already called this deferred, which probably # means that we've already found a key for it. continue for key_id in verify_request.key_ids: if key_id in result_keys: with PreserveLoggingContext(): verify_request.deferred.callback(( server_name, key_id, result_keys[key_id], )) break else: # The else block is only reached if the loop above # doesn't break. missing_keys.setdefault(server_name, set()).update( verify_request.key_ids ) requests_missing_keys.append(verify_request) if not missing_keys: break with PreserveLoggingContext(): for verify_request in requests_missing_keys: verify_request.deferred.errback(SynapseError( 401, "No key for %s with id %s" % ( verify_request.server_name, verify_request.key_ids, ), Codes.UNAUTHORIZED, )) def on_err(err): with PreserveLoggingContext(): for verify_request in verify_requests: if not verify_request.deferred.called: verify_request.deferred.errback(err) preserve_fn(do_iterations)().addErrback(on_err) @defer.inlineCallbacks def get_keys_from_store(self, server_name_and_key_ids): """ Args: server_name_and_key_ids (list[(str, iterable[str])]): list of (server_name, iterable[key_id]) tuples to fetch keys for Returns: Deferred: resolves to dict[str, dict[str, VerifyKey]]: map from server_name -> key_id -> VerifyKey """ res = yield logcontext.make_deferred_yieldable(defer.gatherResults( [ preserve_fn(self.store.get_server_verify_keys)( server_name, key_ids ).addCallback(lambda ks, server: (server, ks), server_name) for server_name, key_ids in server_name_and_key_ids ], consumeErrors=True, ).addErrback(unwrapFirstError)) defer.returnValue(dict(res)) @defer.inlineCallbacks def get_keys_from_perspectives(self, server_name_and_key_ids): @defer.inlineCallbacks def get_key(perspective_name, perspective_keys): try: result = yield self.get_server_verify_key_v2_indirect( server_name_and_key_ids, perspective_name, perspective_keys ) defer.returnValue(result) except Exception as e: logger.exception( "Unable to get key from %r: %s %s", perspective_name, type(e).__name__, str(e.message), ) defer.returnValue({}) results = yield logcontext.make_deferred_yieldable(defer.gatherResults( [ preserve_fn(get_key)(p_name, p_keys) for p_name, p_keys in self.perspective_servers.items() ], consumeErrors=True, ).addErrback(unwrapFirstError)) union_of_keys = {} for result in results: for server_name, keys in result.items(): union_of_keys.setdefault(server_name, {}).update(keys) defer.returnValue(union_of_keys) @defer.inlineCallbacks def get_keys_from_server(self, server_name_and_key_ids): @defer.inlineCallbacks def get_key(server_name, key_ids): keys = None try: keys = yield self.get_server_verify_key_v2_direct( server_name, key_ids ) except Exception as e: logger.info( "Unable to get key %r for %r directly: %s %s", key_ids, server_name, type(e).__name__, str(e.message), ) if not keys: keys = yield self.get_server_verify_key_v1_direct( 
server_name, key_ids ) keys = {server_name: keys} defer.returnValue(keys) results = yield logcontext.make_deferred_yieldable(defer.gatherResults( [ preserve_fn(get_key)(server_name, key_ids) for server_name, key_ids in server_name_and_key_ids ], consumeErrors=True, ).addErrback(unwrapFirstError)) merged = {} for result in results: merged.update(result) defer.returnValue({ server_name: keys for server_name, keys in merged.items() if keys }) @defer.inlineCallbacks def get_server_verify_key_v2_indirect(self, server_names_and_key_ids, perspective_name, perspective_keys): # TODO(mark): Set the minimum_valid_until_ts to that needed by # the events being validated or the current time if validating # an incoming request. query_response = yield self.client.post_json( destination=perspective_name, path=b"/_matrix/key/v2/query", data={ u"server_keys": { server_name: { key_id: { u"minimum_valid_until_ts": 0 } for key_id in key_ids } for server_name, key_ids in server_names_and_key_ids } }, long_retries=True, ) keys = {} responses = query_response["server_keys"] for response in responses: if (u"signatures" not in response or perspective_name not in response[u"signatures"]): raise KeyLookupError( "Key response not signed by perspective server" " %r" % (perspective_name,) ) verified = False for key_id in response[u"signatures"][perspective_name]: if key_id in perspective_keys: verify_signed_json( response, perspective_name, perspective_keys[key_id] ) verified = True if not verified: logging.info( "Response from perspective server %r not signed with a" " known key, signed with: %r, known keys: %r", perspective_name, list(response[u"signatures"][perspective_name]), list(perspective_keys) ) raise KeyLookupError( "Response not signed with a known key for perspective" " server %r" % (perspective_name,) ) processed_response = yield self.process_v2_response( perspective_name, response, only_from_server=False ) for server_name, response_keys in processed_response.items(): keys.setdefault(server_name, {}).update(response_keys) yield logcontext.make_deferred_yieldable(defer.gatherResults( [ preserve_fn(self.store_keys)( server_name=server_name, from_server=perspective_name, verify_keys=response_keys, ) for server_name, response_keys in keys.items() ], consumeErrors=True ).addErrback(unwrapFirstError)) defer.returnValue(keys) @defer.inlineCallbacks def get_server_verify_key_v2_direct(self, server_name, key_ids): keys = {} for requested_key_id in key_ids: if requested_key_id in keys: continue (response, tls_certificate) = yield fetch_server_key( server_name, self.hs.tls_server_context_factory, path=(b"/_matrix/key/v2/server/%s" % ( urllib.quote(requested_key_id), )).encode("ascii"), ) if (u"signatures" not in response or server_name not in response[u"signatures"]): raise KeyLookupError("Key response not signed by remote server") if "tls_fingerprints" not in response: raise KeyLookupError("Key response missing TLS fingerprints") certificate_bytes = crypto.dump_certificate( crypto.FILETYPE_ASN1, tls_certificate ) sha256_fingerprint = hashlib.sha256(certificate_bytes).digest() sha256_fingerprint_b64 = encode_base64(sha256_fingerprint) response_sha256_fingerprints = set() for fingerprint in response[u"tls_fingerprints"]: if u"sha256" in fingerprint: response_sha256_fingerprints.add(fingerprint[u"sha256"]) if sha256_fingerprint_b64 not in response_sha256_fingerprints: raise KeyLookupError("TLS certificate not allowed by fingerprints") response_keys = yield self.process_v2_response( from_server=server_name, 
requested_ids=[requested_key_id], response_json=response, ) keys.update(response_keys) yield logcontext.make_deferred_yieldable(defer.gatherResults( [ preserve_fn(self.store_keys)( server_name=key_server_name, from_server=server_name, verify_keys=verify_keys, ) for key_server_name, verify_keys in keys.items() ], consumeErrors=True ).addErrback(unwrapFirstError)) defer.returnValue(keys) @defer.inlineCallbacks def process_v2_response(self, from_server, response_json, requested_ids=[], only_from_server=True): time_now_ms = self.clock.time_msec() response_keys = {} verify_keys = {} for key_id, key_data in response_json["verify_keys"].items(): if is_signing_algorithm_supported(key_id): key_base64 = key_data["key"] key_bytes = decode_base64(key_base64) verify_key = decode_verify_key_bytes(key_id, key_bytes) verify_key.time_added = time_now_ms verify_keys[key_id] = verify_key old_verify_keys = {} for key_id, key_data in response_json["old_verify_keys"].items(): if is_signing_algorithm_supported(key_id): key_base64 = key_data["key"] key_bytes = decode_base64(key_base64) verify_key = decode_verify_key_bytes(key_id, key_bytes) verify_key.expired = key_data["expired_ts"] verify_key.time_added = time_now_ms old_verify_keys[key_id] = verify_key results = {} server_name = response_json["server_name"] if only_from_server: if server_name != from_server: raise KeyLookupError( "Expected a response for server %r not %r" % ( from_server, server_name ) ) for key_id in response_json["signatures"].get(server_name, {}): if key_id not in response_json["verify_keys"]: raise KeyLookupError( "Key response must include verification keys for all" " signatures" ) if key_id in verify_keys: verify_signed_json( response_json, server_name, verify_keys[key_id] ) signed_key_json = sign_json( response_json, self.config.server_name, self.config.signing_key[0], ) signed_key_json_bytes = encode_canonical_json(signed_key_json) ts_valid_until_ms = signed_key_json[u"valid_until_ts"] updated_key_ids = set(requested_ids) updated_key_ids.update(verify_keys) updated_key_ids.update(old_verify_keys) response_keys.update(verify_keys) response_keys.update(old_verify_keys) yield logcontext.make_deferred_yieldable(defer.gatherResults( [ preserve_fn(self.store.store_server_keys_json)( server_name=server_name, key_id=key_id, from_server=server_name, ts_now_ms=time_now_ms, ts_expires_ms=ts_valid_until_ms, key_json_bytes=signed_key_json_bytes, ) for key_id in updated_key_ids ], consumeErrors=True, ).addErrback(unwrapFirstError)) results[server_name] = response_keys defer.returnValue(results) @defer.inlineCallbacks def get_server_verify_key_v1_direct(self, server_name, key_ids): """Finds a verification key for the server with one of the key ids. Args: server_name (str): The name of the server to fetch a key for. keys_ids (list of str): The key_ids to check for. """ # Try to fetch the key from the remote server. (response, tls_certificate) = yield fetch_server_key( server_name, self.hs.tls_server_context_factory ) # Check the response. 
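# The response below is only trusted if it is signed by the server we
# queried, echoes back that server's TLS certificate, and the certificate
# matches the one actually presented on the connection; the verify keys are
# then decoded, checked against the response signatures, and cached in the
# datastore.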
x509_certificate_bytes = crypto.dump_certificate( crypto.FILETYPE_ASN1, tls_certificate ) if ("signatures" not in response or server_name not in response["signatures"]): raise KeyLookupError("Key response not signed by remote server") if "tls_certificate" not in response: raise KeyLookupError("Key response missing TLS certificate") tls_certificate_b64 = response["tls_certificate"] if encode_base64(x509_certificate_bytes) != tls_certificate_b64: raise KeyLookupError("TLS certificate doesn't match") # Cache the result in the datastore. time_now_ms = self.clock.time_msec() verify_keys = {} for key_id, key_base64 in response["verify_keys"].items(): if is_signing_algorithm_supported(key_id): key_bytes = decode_base64(key_base64) verify_key = decode_verify_key_bytes(key_id, key_bytes) verify_key.time_added = time_now_ms verify_keys[key_id] = verify_key for key_id in response["signatures"][server_name]: if key_id not in response["verify_keys"]: raise KeyLookupError( "Key response must include verification keys for all" " signatures" ) if key_id in verify_keys: verify_signed_json( response, server_name, verify_keys[key_id] ) yield self.store.store_server_certificate( server_name, server_name, time_now_ms, tls_certificate, ) yield self.store_keys( server_name=server_name, from_server=server_name, verify_keys=verify_keys, ) defer.returnValue(verify_keys) def store_keys(self, server_name, from_server, verify_keys): """Store a collection of verify keys for a given server Args: server_name(str): The name of the server the keys are for. from_server(str): The server the keys were downloaded from. verify_keys(dict): A mapping of key_id to VerifyKey. Returns: A deferred that completes when the keys are stored. """ # TODO(markjh): Store whether the keys have expired. return logcontext.make_deferred_yieldable(defer.gatherResults( [ preserve_fn(self.store.store_server_verify_key)( server_name, server_name, key.time_added, key ) for key_id, key in verify_keys.items() ], consumeErrors=True, ).addErrback(unwrapFirstError)) @defer.inlineCallbacks def _handle_key_deferred(verify_request): server_name = verify_request.server_name try: with PreserveLoggingContext(): _, key_id, verify_key = yield verify_request.deferred except IOError as e: logger.warn( "Got IOError when downloading keys for %s: %s %s", server_name, type(e).__name__, str(e.message), ) raise SynapseError( 502, "Error downloading keys for %s" % (server_name,), Codes.UNAUTHORIZED, ) except Exception as e: logger.exception( "Got Exception when downloading keys for %s: %s %s", server_name, type(e).__name__, str(e.message), ) raise SynapseError( 401, "No key for %s with id %s" % (server_name, verify_request.key_ids), Codes.UNAUTHORIZED, ) json_object = verify_request.json_object logger.debug("Got key %s %s:%s for server %s, verifying" % ( key_id, verify_key.alg, verify_key.version, server_name, )) try: verify_signed_json(json_object, server_name, verify_key) except: raise SynapseError( 401, "Invalid signature for server %s with key %s:%s" % ( server_name, verify_key.alg, verify_key.version ), Codes.UNAUTHORIZED, ) synapse-0.24.0/synapse/event_auth.py000066400000000000000000000533641317335640100174730ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014 - 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import logging from canonicaljson import encode_canonical_json from signedjson.key import decode_verify_key_bytes from signedjson.sign import verify_signed_json, SignatureVerifyException from unpaddedbase64 import decode_base64 from synapse.api.constants import EventTypes, Membership, JoinRules from synapse.api.errors import AuthError, SynapseError, EventSizeError from synapse.types import UserID, get_domain_from_id logger = logging.getLogger(__name__) def check(event, auth_events, do_sig_check=True, do_size_check=True): """ Checks if this event is correctly authed. Args: event: the event being checked. auth_events (dict: event-key -> event): the existing room state. Returns: True if the auth checks pass. """ if do_size_check: _check_size_limits(event) if not hasattr(event, "room_id"): raise AuthError(500, "Event has no room_id: %s" % event) if do_sig_check: sender_domain = get_domain_from_id(event.sender) event_id_domain = get_domain_from_id(event.event_id) is_invite_via_3pid = ( event.type == EventTypes.Member and event.membership == Membership.INVITE and "third_party_invite" in event.content ) # Check the sender's domain has signed the event if not event.signatures.get(sender_domain): # We allow invites via 3pid to have a sender from a different # HS, as the sender must match the sender of the original # 3pid invite. This is checked further down with the # other dedicated membership checks. if not is_invite_via_3pid: raise AuthError(403, "Event not signed by sender's server") # Check the event_id's domain has signed the event if not event.signatures.get(event_id_domain): raise AuthError(403, "Event not signed by sending server") if auth_events is None: # Oh, we don't know what the state of the room was, so we # are trusting that this is allowed (at least for now) logger.warn("Trusting event: %s", event.event_id) return True if event.type == EventTypes.Create: room_id_domain = get_domain_from_id(event.room_id) if room_id_domain != sender_domain: raise AuthError( 403, "Creation event's room_id domain does not match sender's" ) # FIXME return True creation_event = auth_events.get((EventTypes.Create, ""), None) if not creation_event: raise SynapseError( 403, "Room %r does not exist" % (event.room_id,) ) creating_domain = get_domain_from_id(event.room_id) originating_domain = get_domain_from_id(event.sender) if creating_domain != originating_domain: if not _can_federate(event, auth_events): raise AuthError( 403, "This room has been marked as unfederatable." 
) # FIXME: Temp hack if event.type == EventTypes.Aliases: if not event.is_state(): raise AuthError( 403, "Alias event must be a state event", ) if not event.state_key: raise AuthError( 403, "Alias event must have non-empty state_key" ) sender_domain = get_domain_from_id(event.sender) if event.state_key != sender_domain: raise AuthError( 403, "Alias event's state_key does not match sender's domain" ) return True if logger.isEnabledFor(logging.DEBUG): logger.debug( "Auth events: %s", [a.event_id for a in auth_events.values()] ) if event.type == EventTypes.Member: allowed = _is_membership_change_allowed( event, auth_events ) if allowed: logger.debug("Allowing! %s", event) else: logger.debug("Denying! %s", event) return allowed _check_event_sender_in_room(event, auth_events) # Special case to allow m.room.third_party_invite events wherever # a user is allowed to issue invites. Fixes # https://github.com/vector-im/vector-web/issues/1208 hopefully if event.type == EventTypes.ThirdPartyInvite: user_level = get_user_power_level(event.user_id, auth_events) invite_level = _get_named_level(auth_events, "invite", 0) if user_level < invite_level: raise AuthError( 403, ( "You cannot issue a third party invite for %s." % (event.content.display_name,) ) ) else: return True _can_send_event(event, auth_events) if event.type == EventTypes.PowerLevels: _check_power_levels(event, auth_events) if event.type == EventTypes.Redaction: check_redaction(event, auth_events) logger.debug("Allowing! %s", event) def _check_size_limits(event): def too_big(field): raise EventSizeError("%s too large" % (field,)) if len(event.user_id) > 255: too_big("user_id") if len(event.room_id) > 255: too_big("room_id") if event.is_state() and len(event.state_key) > 255: too_big("state_key") if len(event.type) > 255: too_big("type") if len(event.event_id) > 255: too_big("event_id") if len(encode_canonical_json(event.get_pdu_json())) > 65536: too_big("event") def _can_federate(event, auth_events): creation_event = auth_events.get((EventTypes.Create, "")) return creation_event.content.get("m.federate", True) is True def _is_membership_change_allowed(event, auth_events): membership = event.content["membership"] # Check if this is the room creator joining: if len(event.prev_events) == 1 and Membership.JOIN == membership: # Get room creation event: key = (EventTypes.Create, "", ) create = auth_events.get(key) if create and event.prev_events[0][0] == create.event_id: if create.content["creator"] == event.state_key: return True target_user_id = event.state_key creating_domain = get_domain_from_id(event.room_id) target_domain = get_domain_from_id(target_user_id) if creating_domain != target_domain: if not _can_federate(event, auth_events): raise AuthError( 403, "This room has been marked as unfederatable." 
) # get info about the caller key = (EventTypes.Member, event.user_id, ) caller = auth_events.get(key) caller_in_room = caller and caller.membership == Membership.JOIN caller_invited = caller and caller.membership == Membership.INVITE # get info about the target key = (EventTypes.Member, target_user_id, ) target = auth_events.get(key) target_in_room = target and target.membership == Membership.JOIN target_banned = target and target.membership == Membership.BAN key = (EventTypes.JoinRules, "", ) join_rule_event = auth_events.get(key) if join_rule_event: join_rule = join_rule_event.content.get( "join_rule", JoinRules.INVITE ) else: join_rule = JoinRules.INVITE user_level = get_user_power_level(event.user_id, auth_events) target_level = get_user_power_level( target_user_id, auth_events ) # FIXME (erikj): What should we do here as the default? ban_level = _get_named_level(auth_events, "ban", 50) logger.debug( "_is_membership_change_allowed: %s", { "caller_in_room": caller_in_room, "caller_invited": caller_invited, "target_banned": target_banned, "target_in_room": target_in_room, "membership": membership, "join_rule": join_rule, "target_user_id": target_user_id, "event.user_id": event.user_id, } ) if Membership.INVITE == membership and "third_party_invite" in event.content: if not _verify_third_party_invite(event, auth_events): raise AuthError(403, "You are not invited to this room.") if target_banned: raise AuthError( 403, "%s is banned from the room" % (target_user_id,) ) return True if Membership.JOIN != membership: if (caller_invited and Membership.LEAVE == membership and target_user_id == event.user_id): return True if not caller_in_room: # caller isn't joined raise AuthError( 403, "%s not in room %s." % (event.user_id, event.room_id,) ) if Membership.INVITE == membership: # TODO (erikj): We should probably handle this more intelligently # PRIVATE join rules. # Invites are valid iff caller is in the room and target isn't. if target_banned: raise AuthError( 403, "%s is banned from the room" % (target_user_id,) ) elif target_in_room: # the target is already in the room. raise AuthError(403, "%s is already in the room." % target_user_id) else: invite_level = _get_named_level(auth_events, "invite", 0) if user_level < invite_level: raise AuthError( 403, "You cannot invite user %s." % target_user_id ) elif Membership.JOIN == membership: # Joins are valid iff caller == target and they were: # invited: They are accepting the invitation # joined: It's a NOOP if event.user_id != target_user_id: raise AuthError(403, "Cannot force another user to join.") elif target_banned: raise AuthError(403, "You are banned from this room") elif join_rule == JoinRules.PUBLIC: pass elif join_rule == JoinRules.INVITE: if not caller_in_room and not caller_invited: raise AuthError(403, "You are not invited to this room.") else: # TODO (erikj): may_join list # TODO (erikj): private rooms raise AuthError(403, "You are not allowed to join this room") elif Membership.LEAVE == membership: # TODO (erikj): Implement kicks. if target_banned and user_level < ban_level: raise AuthError( 403, "You cannot unban user &s." % (target_user_id,) ) elif target_user_id != event.user_id: kick_level = _get_named_level(auth_events, "kick", 50) if user_level < kick_level or user_level <= target_level: raise AuthError( 403, "You cannot kick user %s." 
% target_user_id ) elif Membership.BAN == membership: if user_level < ban_level or user_level <= target_level: raise AuthError(403, "You don't have permission to ban") else: raise AuthError(500, "Unknown membership %s" % membership) return True def _check_event_sender_in_room(event, auth_events): key = (EventTypes.Member, event.user_id, ) member_event = auth_events.get(key) return _check_joined_room( member_event, event.user_id, event.room_id ) def _check_joined_room(member, user_id, room_id): if not member or member.membership != Membership.JOIN: raise AuthError(403, "User %s not in room %s (%s)" % ( user_id, room_id, repr(member) )) def get_send_level(etype, state_key, auth_events): key = (EventTypes.PowerLevels, "", ) send_level_event = auth_events.get(key) send_level = None if send_level_event: send_level = send_level_event.content.get("events", {}).get( etype ) if send_level is None: if state_key is not None: send_level = send_level_event.content.get( "state_default", 50 ) else: send_level = send_level_event.content.get( "events_default", 0 ) if send_level: send_level = int(send_level) else: send_level = 0 return send_level def _can_send_event(event, auth_events): send_level = get_send_level( event.type, event.get("state_key", None), auth_events ) user_level = get_user_power_level(event.user_id, auth_events) if user_level < send_level: raise AuthError( 403, "You don't have permission to post that to the room. " + "user_level (%d) < send_level (%d)" % (user_level, send_level) ) # Check state_key if hasattr(event, "state_key"): if event.state_key.startswith("@"): if event.state_key != event.user_id: raise AuthError( 403, "You are not allowed to set others state" ) return True def check_redaction(event, auth_events): """Check whether the event sender is allowed to redact the target event. Returns: True if the the sender is allowed to redact the target event if the target event was created by them. False if the sender is allowed to redact the target event with no further checks. Raises: AuthError if the event sender is definitely not allowed to redact the target event. 
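    For illustration (informal reading of the checks below): a sender whose
    power level reaches the room's "redact" level gets False, meaning they
    may redact anything without further checks, while a redaction event
    originating from the same server as the event it redacts gets True, so
    the caller must still verify the target event was created by them.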
""" user_level = get_user_power_level(event.user_id, auth_events) redact_level = _get_named_level(auth_events, "redact", 50) if user_level >= redact_level: return False redacter_domain = get_domain_from_id(event.event_id) redactee_domain = get_domain_from_id(event.redacts) if redacter_domain == redactee_domain: return True raise AuthError( 403, "You don't have permission to redact events" ) def _check_power_levels(event, auth_events): user_list = event.content.get("users", {}) # Validate users for k, v in user_list.items(): try: UserID.from_string(k) except: raise SynapseError(400, "Not a valid user_id: %s" % (k,)) try: int(v) except: raise SynapseError(400, "Not a valid power level: %s" % (v,)) key = (event.type, event.state_key, ) current_state = auth_events.get(key) if not current_state: return user_level = get_user_power_level(event.user_id, auth_events) # Check other levels: levels_to_check = [ ("users_default", None), ("events_default", None), ("state_default", None), ("ban", None), ("redact", None), ("kick", None), ("invite", None), ] old_list = current_state.content.get("users", {}) for user in set(old_list.keys() + user_list.keys()): levels_to_check.append( (user, "users") ) old_list = current_state.content.get("events", {}) new_list = event.content.get("events", {}) for ev_id in set(old_list.keys() + new_list.keys()): levels_to_check.append( (ev_id, "events") ) old_state = current_state.content new_state = event.content for level_to_check, dir in levels_to_check: old_loc = old_state new_loc = new_state if dir: old_loc = old_loc.get(dir, {}) new_loc = new_loc.get(dir, {}) if level_to_check in old_loc: old_level = int(old_loc[level_to_check]) else: old_level = None if level_to_check in new_loc: new_level = int(new_loc[level_to_check]) else: new_level = None if new_level is not None and old_level is not None: if new_level == old_level: continue if dir == "users" and level_to_check != event.user_id: if old_level == user_level: raise AuthError( 403, "You don't have permission to remove ops level equal " "to your own" ) if old_level > user_level or new_level > user_level: raise AuthError( 403, "You don't have permission to add ops level greater " "than your own" ) def _get_power_level_event(auth_events): key = (EventTypes.PowerLevels, "", ) return auth_events.get(key) def get_user_power_level(user_id, auth_events): power_level_event = _get_power_level_event(auth_events) if power_level_event: level = power_level_event.content.get("users", {}).get(user_id) if not level: level = power_level_event.content.get("users_default", 0) if level is None: return 0 else: return int(level) else: key = (EventTypes.Create, "", ) create_event = auth_events.get(key) if (create_event is not None and create_event.content["creator"] == user_id): return 100 else: return 0 def _get_named_level(auth_events, name, default): power_level_event = _get_power_level_event(auth_events) if not power_level_event: return default level = power_level_event.content.get(name, None) if level is not None: return int(level) else: return default def _verify_third_party_invite(event, auth_events): """ Validates that the invite event is authorized by a previous third-party invite. Checks that the public key, and keyserver, match those in the third party invite, and that the invite event has a signature issued using that public key. Args: event: The m.room.member join event being validated. auth_events: All relevant previous context events which may be used for authorization decisions. 
Return: True if the event fulfills the expectations of a previous third party invite event. """ if "third_party_invite" not in event.content: return False if "signed" not in event.content["third_party_invite"]: return False signed = event.content["third_party_invite"]["signed"] for key in {"mxid", "token"}: if key not in signed: return False token = signed["token"] invite_event = auth_events.get( (EventTypes.ThirdPartyInvite, token,) ) if not invite_event: return False if invite_event.sender != event.sender: return False if event.user_id != invite_event.user_id: return False if signed["mxid"] != event.state_key: return False if signed["token"] != token: return False for public_key_object in get_public_keys(invite_event): public_key = public_key_object["public_key"] try: for server, signature_block in signed["signatures"].items(): for key_name, encoded_signature in signature_block.items(): if not key_name.startswith("ed25519:"): continue verify_key = decode_verify_key_bytes( key_name, decode_base64(public_key) ) verify_signed_json(signed, server, verify_key) # We got the public key from the invite, so we know that the # correct server signed the signed bundle. # The caller is responsible for checking that the signing # server has not revoked that public key. return True except (KeyError, SignatureVerifyException,): continue return False def get_public_keys(invite_event): public_keys = [] if "public_key" in invite_event.content: o = { "public_key": invite_event.content["public_key"], } if "key_validity_url" in invite_event.content: o["key_validity_url"] = invite_event.content["key_validity_url"] public_keys.append(o) public_keys.extend(invite_event.content.get("public_keys", [])) return public_keys def auth_types_for_event(event): """Given an event, return a list of (EventType, StateKey) that may be needed to auth the event. The returned list may be a superset of what would actually be required depending on the full state of the room. Used to limit the number of events to fetch from the database to actually auth the event. """ if event.type == EventTypes.Create: return [] auth_types = [] auth_types.append((EventTypes.PowerLevels, "", )) auth_types.append((EventTypes.Member, event.user_id, )) auth_types.append((EventTypes.Create, "", )) if event.type == EventTypes.Member: membership = event.content["membership"] if membership in [Membership.JOIN, Membership.INVITE]: auth_types.append((EventTypes.JoinRules, "", )) auth_types.append((EventTypes.Member, event.state_key, )) if membership == Membership.INVITE: if "third_party_invite" in event.content: key = ( EventTypes.ThirdPartyInvite, event.content["third_party_invite"]["signed"]["token"] ) auth_types.append(key) return auth_types synapse-0.24.0/synapse/events/000077500000000000000000000000001317335640100162505ustar00rootroot00000000000000synapse-0.24.0/synapse/events/__init__.py000066400000000000000000000134651317335640100203720ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. from synapse.util.frozenutils import freeze from synapse.util.caches import intern_dict # Whether we should use frozen_dict in FrozenEvent. Using frozen_dicts prevents # bugs where we accidentally share e.g. signature dicts. However, converting # a dict to frozen_dicts is expensive. USE_FROZEN_DICTS = True class _EventInternalMetadata(object): def __init__(self, internal_metadata_dict): self.__dict__ = dict(internal_metadata_dict) def get_dict(self): return dict(self.__dict__) def is_outlier(self): return getattr(self, "outlier", False) def is_invite_from_remote(self): return getattr(self, "invite_from_remote", False) def get_send_on_behalf_of(self): """Whether this server should send the event on behalf of another server. This is used by the federation "send_join" API to forward the initial join event for a server in the room. returns a str with the name of the server this event is sent on behalf of. """ return getattr(self, "send_on_behalf_of", None) def _event_dict_property(key): def getter(self): return self._event_dict[key] def setter(self, v): self._event_dict[key] = v def delete(self): del self._event_dict[key] return property( getter, setter, delete, ) class EventBase(object): def __init__(self, event_dict, signatures={}, unsigned={}, internal_metadata_dict={}, rejected_reason=None): self.signatures = signatures self.unsigned = unsigned self.rejected_reason = rejected_reason self._event_dict = event_dict self.internal_metadata = _EventInternalMetadata( internal_metadata_dict ) auth_events = _event_dict_property("auth_events") depth = _event_dict_property("depth") content = _event_dict_property("content") hashes = _event_dict_property("hashes") origin = _event_dict_property("origin") origin_server_ts = _event_dict_property("origin_server_ts") prev_events = _event_dict_property("prev_events") prev_state = _event_dict_property("prev_state") redacts = _event_dict_property("redacts") room_id = _event_dict_property("room_id") sender = _event_dict_property("sender") user_id = _event_dict_property("sender") @property def membership(self): return self.content["membership"] def is_state(self): return hasattr(self, "state_key") and self.state_key is not None def get_dict(self): d = dict(self._event_dict) d.update({ "signatures": self.signatures, "unsigned": dict(self.unsigned), }) return d def get(self, key, default=None): return self._event_dict.get(key, default) def get_internal_metadata_dict(self): return self.internal_metadata.get_dict() def get_pdu_json(self, time_now=None): pdu_json = self.get_dict() if time_now is not None and "age_ts" in pdu_json["unsigned"]: age = time_now - pdu_json["unsigned"]["age_ts"] pdu_json.setdefault("unsigned", {})["age"] = int(age) del pdu_json["unsigned"]["age_ts"] # This may be a frozen event pdu_json["unsigned"].pop("redacted_because", None) return pdu_json def __set__(self, instance, value): raise AttributeError("Unrecognized attribute %s" % (instance,)) def __getitem__(self, field): return self._event_dict[field] def __contains__(self, field): return field in self._event_dict def items(self): return self._event_dict.items() class FrozenEvent(EventBase): def __init__(self, event_dict, internal_metadata_dict={}, rejected_reason=None): event_dict = dict(event_dict) # Signatures is a dict of dicts, and this is faster than doing a # copy.deepcopy signatures = { name: {sig_id: sig for sig_id, sig in sigs.items()} for name, sigs in 
event_dict.pop("signatures", {}).items() } unsigned = dict(event_dict.pop("unsigned", {})) # We intern these strings because they turn up a lot (especially when # caching). event_dict = intern_dict(event_dict) if USE_FROZEN_DICTS: frozen_dict = freeze(event_dict) else: frozen_dict = event_dict self.event_id = event_dict["event_id"] self.type = event_dict["type"] if "state_key" in event_dict: self.state_key = event_dict["state_key"] super(FrozenEvent, self).__init__( frozen_dict, signatures=signatures, unsigned=unsigned, internal_metadata_dict=internal_metadata_dict, rejected_reason=rejected_reason, ) @staticmethod def from_event(event): e = FrozenEvent( event.get_pdu_json() ) e.internal_metadata = event.internal_metadata return e def __str__(self): return self.__repr__() def __repr__(self): return "" % ( self.get("event_id", None), self.get("type", None), self.get("state_key", None), ) synapse-0.24.0/synapse/events/builder.py000066400000000000000000000044671317335640100202630ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from . import EventBase, FrozenEvent, _event_dict_property from synapse.types import EventID from synapse.util.stringutils import random_string import copy class EventBuilder(EventBase): def __init__(self, key_values={}, internal_metadata_dict={}): signatures = copy.deepcopy(key_values.pop("signatures", {})) unsigned = copy.deepcopy(key_values.pop("unsigned", {})) super(EventBuilder, self).__init__( key_values, signatures=signatures, unsigned=unsigned, internal_metadata_dict=internal_metadata_dict, ) event_id = _event_dict_property("event_id") state_key = _event_dict_property("state_key") type = _event_dict_property("type") def build(self): return FrozenEvent.from_event(self) class EventBuilderFactory(object): def __init__(self, clock, hostname): self.clock = clock self.hostname = hostname self.event_id_count = 0 def create_event_id(self): i = str(self.event_id_count) self.event_id_count += 1 local_part = str(int(self.clock.time())) + i + random_string(5) e_id = EventID.create(local_part, self.hostname) return e_id.to_string() def new(self, key_values={}): key_values["event_id"] = self.create_event_id() time_now = int(self.clock.time_msec()) key_values.setdefault("origin", self.hostname) key_values.setdefault("origin_server_ts", time_now) key_values.setdefault("unsigned", {}) age = key_values["unsigned"].pop("age", 0) key_values["unsigned"].setdefault("age_ts", time_now - age) key_values["signatures"] = {} return EventBuilder(key_values=key_values,) synapse-0.24.0/synapse/events/snapshot.py000066400000000000000000000044361317335640100204700ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. class EventContext(object): """ Attributes: current_state_ids (dict[(str, str), str]): The current state map including the current event. (type, state_key) -> event_id prev_state_ids (dict[(str, str), str]): The current state map excluding the current event. (type, state_key) -> event_id state_group (int): state group id rejected (bool|str): A rejection reason if the event was rejected, else False push_actions (list[(str, list[object])]): list of (user_id, actions) tuples prev_group (int): Previously persisted state group. ``None`` for an outlier. delta_ids (dict[(str, str), str]): Delta from ``prev_group``. (type, state_key) -> event_id. ``None`` for an outlier. prev_state_events (?): XXX: is this ever set to anything other than the empty list? """ __slots__ = [ "current_state_ids", "prev_state_ids", "state_group", "rejected", "push_actions", "prev_group", "delta_ids", "prev_state_events", "app_service", ] def __init__(self): # The current state including the current event self.current_state_ids = None # The current state excluding the current event self.prev_state_ids = None self.state_group = None self.rejected = False self.push_actions = [] # A previously persisted state group and a delta between that # and this state. self.prev_group = None self.delta_ids = None self.prev_state_events = None self.app_service = None synapse-0.24.0/synapse/events/spamcheck.py000066400000000000000000000071011317335640100205570ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2017 New Vector Ltd. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. class SpamChecker(object): def __init__(self, hs): self.spam_checker = None module = None config = None try: module, config = hs.config.spam_checker except: pass if module is not None: self.spam_checker = module(config=config) def check_event_for_spam(self, event): """Checks if a given event is considered "spammy" by this server. If the server considers an event spammy, then it will be rejected if sent by a local user. If it is sent by a user on another server, then users receive a blank event. Args: event (synapse.events.EventBase): the event to be checked Returns: bool: True if the event is spammy. """ if self.spam_checker is None: return False return self.spam_checker.check_event_for_spam(event) def user_may_invite(self, inviter_userid, invitee_userid, room_id): """Checks if a given user may send an invite If this method returns false, the invite will be rejected. 
Args: userid (string): The sender's user ID Returns: bool: True if the user may send an invite, otherwise False """ if self.spam_checker is None: return True return self.spam_checker.user_may_invite(inviter_userid, invitee_userid, room_id) def user_may_create_room(self, userid): """Checks if a given user may create a room If this method returns false, the creation request will be rejected. Args: userid (string): The sender's user ID Returns: bool: True if the user may create a room, otherwise False """ if self.spam_checker is None: return True return self.spam_checker.user_may_create_room(userid) def user_may_create_room_alias(self, userid, room_alias): """Checks if a given user may create a room alias If this method returns false, the association request will be rejected. Args: userid (string): The sender's user ID room_alias (string): The alias to be created Returns: bool: True if the user may create a room alias, otherwise False """ if self.spam_checker is None: return True return self.spam_checker.user_may_create_room_alias(userid, room_alias) def user_may_publish_room(self, userid, room_id): """Checks if a given user may publish a room to the directory If this method returns false, the publish request will be rejected. Args: userid (string): The sender's user ID room_id (string): The ID of the room that would be published Returns: bool: True if the user may publish the room, otherwise False """ if self.spam_checker is None: return True return self.spam_checker.user_may_publish_room(userid, room_id) synapse-0.24.0/synapse/events/utils.py000066400000000000000000000212461317335640100177670ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from synapse.api.constants import EventTypes from . import EventBase from frozendict import frozendict import re # Split strings on "." but not "\." This uses a negative lookbehind assertion for '\' # (?): List of keys to drill down to in 'src'. """ if len(field) == 0: # this should be impossible return if len(field) == 1: # common case e.g. 'origin_server_ts' if field[0] in src: dst[field[0]] = src[field[0]] return # Else is a nested field e.g. 'content.body' # Pop the last field as that's the key to move across and we need the # parent dict in order to access the data. Drill down to the right dict. key_to_move = field.pop(-1) sub_dict = src for sub_field in field: # e.g. sub_field => "content" if sub_field in sub_dict and type(sub_dict[sub_field]) in [dict, frozendict]: sub_dict = sub_dict[sub_field] else: return if key_to_move not in sub_dict: return # Insert the key into the output dictionary, creating nested objects # as required. We couldn't do this any earlier or else we'd need to delete # the empty objects if the key didn't exist. 
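    # Worked illustration (hypothetical values): with
    # src = {"content": {"body": "hi", "msgtype": "m.text"}} and an original
    # field of ["content", "body"], key_to_move is now "body" and field is
    # ["content"], so the loop below creates dst["content"] and copies
    # dst["content"]["body"] = "hi", leaving "msgtype" out of the output.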
sub_out_dict = dst for sub_field in field: sub_out_dict = sub_out_dict.setdefault(sub_field, {}) sub_out_dict[key_to_move] = sub_dict[key_to_move] def only_fields(dictionary, fields): """Return a new dict with only the fields in 'dictionary' which are present in 'fields'. If there are no event fields specified then all fields are included. The entries may include '.' charaters to indicate sub-fields. So ['content.body'] will include the 'body' field of the 'content' object. A literal '.' character in a field name may be escaped using a '\'. Args: dictionary(dict): The dictionary to read from. fields(list): A list of fields to copy over. Only shallow refs are taken. Returns: dict: A new dictionary with only the given fields. If fields was empty, the same dictionary is returned. """ if len(fields) == 0: return dictionary # for each field, convert it: # ["content.body.thing\.with\.dots"] => [["content", "body", "thing\.with\.dots"]] split_fields = [SPLIT_FIELD_REGEX.split(f) for f in fields] # for each element of the output array of arrays: # remove escaping so we can use the right key names. split_fields[:] = [ [f.replace(r'\.', r'.') for f in field_array] for field_array in split_fields ] output = {} for field_array in split_fields: _copy_field(dictionary, output, field_array) return output def format_event_raw(d): return d def format_event_for_client_v1(d): d = format_event_for_client_v2(d) sender = d.get("sender") if sender is not None: d["user_id"] = sender copy_keys = ( "age", "redacted_because", "replaces_state", "prev_content", "invite_room_state", ) for key in copy_keys: if key in d["unsigned"]: d[key] = d["unsigned"][key] return d def format_event_for_client_v2(d): drop_keys = ( "auth_events", "prev_events", "hashes", "signatures", "depth", "origin", "prev_state", ) for key in drop_keys: d.pop(key, None) return d def format_event_for_client_v2_without_room_id(d): d = format_event_for_client_v2(d) d.pop("room_id", None) return d def serialize_event(e, time_now_ms, as_client_event=True, event_format=format_event_for_client_v1, token_id=None, only_event_fields=None, is_invite=False): """Serialize event for clients Args: e (EventBase) time_now_ms (int) as_client_event (bool) event_format token_id only_event_fields is_invite (bool): Whether this is an invite that is being sent to the invitee Returns: dict """ # FIXME(erikj): To handle the case of presence events and the like if not isinstance(e, EventBase): return e time_now_ms = int(time_now_ms) # Should this strip out None's? d = {k: v for k, v in e.get_dict().items()} if "age_ts" in d["unsigned"]: d["unsigned"]["age"] = time_now_ms - d["unsigned"]["age_ts"] del d["unsigned"]["age_ts"] if "redacted_because" in e.unsigned: d["unsigned"]["redacted_because"] = serialize_event( e.unsigned["redacted_because"], time_now_ms, event_format=event_format ) if token_id is not None: if token_id == getattr(e.internal_metadata, "token_id", None): txn_id = getattr(e.internal_metadata, "txn_id", None) if txn_id is not None: d["unsigned"]["transaction_id"] = txn_id # If this is an invite for somebody else, then we don't care about the # invite_room_state as that's meant solely for the invitee. Other clients # will already have the state since they're in the room. 
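    # Illustration (hypothetical stripped-state contents): the invitee's copy
    # keeps something like
    #   d["unsigned"]["invite_room_state"] == [{"type": "m.room.name", ...}]
    # while for every other recipient the key is popped just below.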
if not is_invite: d["unsigned"].pop("invite_room_state", None) if as_client_event: d = event_format(d) if only_event_fields: if (not isinstance(only_event_fields, list) or not all(isinstance(f, basestring) for f in only_event_fields)): raise TypeError("only_event_fields must be a list of strings") d = only_fields(d, only_event_fields) return d synapse-0.24.0/synapse/events/validator.py000066400000000000000000000055401317335640100206130ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from synapse.types import EventID, RoomID, UserID from synapse.api.errors import SynapseError from synapse.api.constants import EventTypes, Membership class EventValidator(object): def validate(self, event): EventID.from_string(event.event_id) RoomID.from_string(event.room_id) required = [ # "auth_events", "content", # "hashes", "origin", # "prev_events", "sender", "type", ] for k in required: if not hasattr(event, k): raise SynapseError(400, "Event does not have key %s" % (k,)) # Check that the following keys have string values strings = [ "origin", "sender", "type", ] if hasattr(event, "state_key"): strings.append("state_key") for s in strings: if not isinstance(getattr(event, s), basestring): raise SynapseError(400, "Not '%s' a string type" % (s,)) if event.type == EventTypes.Member: if "membership" not in event.content: raise SynapseError(400, "Content has not membership key") if event.content["membership"] not in Membership.LIST: raise SynapseError(400, "Invalid membership key") # Check that the following keys have dictionary values # TODO # Check that the following keys have the correct format for DAGs # TODO def validate_new(self, event): self.validate(event) UserID.from_string(event.sender) if event.type == EventTypes.Message: strings = [ "body", "msgtype", ] self._ensure_strings(event.content, strings) elif event.type == EventTypes.Topic: self._ensure_strings(event.content, ["topic"]) elif event.type == EventTypes.Name: self._ensure_strings(event.content, ["name"]) def _ensure_strings(self, d, keys): for s in keys: if s not in d: raise SynapseError(400, "'%s' not in content" % (s,)) if not isinstance(d[s], basestring): raise SynapseError(400, "Not '%s' a string type" % (s,)) synapse-0.24.0/synapse/federation/000077500000000000000000000000001317335640100170645ustar00rootroot00000000000000synapse-0.24.0/synapse/federation/__init__.py000066400000000000000000000015241317335640100211770ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. """ This package includes all the federation specific logic. """ from .replication import ReplicationLayer def initialize_http_replication(hs): transport = hs.get_federation_transport_client() return ReplicationLayer(hs, transport) synapse-0.24.0/synapse/federation/federation_base.py000066400000000000000000000132171317335640100225540ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import logging from synapse.api.errors import SynapseError from synapse.crypto.event_signing import check_event_content_hash from synapse.events.utils import prune_event from synapse.util import unwrapFirstError, logcontext from twisted.internet import defer logger = logging.getLogger(__name__) class FederationBase(object): def __init__(self, hs): self.spam_checker = hs.get_spam_checker() @defer.inlineCallbacks def _check_sigs_and_hash_and_fetch(self, origin, pdus, outlier=False, include_none=False): """Takes a list of PDUs and checks the signatures and hashs of each one. If a PDU fails its signature check then we check if we have it in the database and if not then request if from the originating server of that PDU. If a PDU fails its content hash check then it is redacted. The given list of PDUs are not modified, instead the function returns a new list. Args: pdu (list) outlier (bool) Returns: Deferred : A list of PDUs that have valid signatures and hashes. """ deferreds = self._check_sigs_and_hashes(pdus) @defer.inlineCallbacks def handle_check_result(pdu, deferred): try: res = yield logcontext.make_deferred_yieldable(deferred) except SynapseError: res = None if not res: # Check local db. res = yield self.store.get_event( pdu.event_id, allow_rejected=True, allow_none=True, ) if not res and pdu.origin != origin: try: res = yield self.get_pdu( destinations=[pdu.origin], event_id=pdu.event_id, outlier=outlier, timeout=10000, ) except SynapseError: pass if not res: logger.warn( "Failed to find copy of %s with valid signature", pdu.event_id, ) defer.returnValue(res) handle = logcontext.preserve_fn(handle_check_result) deferreds2 = [ handle(pdu, deferred) for pdu, deferred in zip(pdus, deferreds) ] valid_pdus = yield logcontext.make_deferred_yieldable( defer.gatherResults( deferreds2, consumeErrors=True, ) ).addErrback(unwrapFirstError) if include_none: defer.returnValue(valid_pdus) else: defer.returnValue([p for p in valid_pdus if p]) def _check_sigs_and_hash(self, pdu): return logcontext.make_deferred_yieldable( self._check_sigs_and_hashes([pdu])[0], ) def _check_sigs_and_hashes(self, pdus): """Checks that each of the received events is correctly signed by the sending server. 
Args: pdus (list[FrozenEvent]): the events to be checked Returns: list[Deferred]: for each input event, a deferred which: * returns the original event if the checks pass * returns a redacted version of the event (if the signature matched but the hash did not) * throws a SynapseError if the signature check failed. The deferreds run their callbacks in the sentinel logcontext. """ redacted_pdus = [ prune_event(pdu) for pdu in pdus ] deferreds = self.keyring.verify_json_objects_for_server([ (p.origin, p.get_pdu_json()) for p in redacted_pdus ]) ctx = logcontext.LoggingContext.current_context() def callback(_, pdu, redacted): with logcontext.PreserveLoggingContext(ctx): if not check_event_content_hash(pdu): logger.warn( "Event content has been tampered, redacting %s: %s", pdu.event_id, pdu.get_pdu_json() ) return redacted if self.spam_checker.check_event_for_spam(pdu): logger.warn( "Event contains spam, redacting %s: %s", pdu.event_id, pdu.get_pdu_json() ) return redacted return pdu def errback(failure, pdu): failure.trap(SynapseError) with logcontext.PreserveLoggingContext(ctx): logger.warn( "Signature check failed for %s", pdu.event_id, ) return failure for deferred, pdu, redacted in zip(deferreds, pdus, redacted_pdus): deferred.addCallbacks( callback, errback, callbackArgs=[pdu, redacted], errbackArgs=[pdu], ) return deferreds synapse-0.24.0/synapse/federation/federation_client.py000066400000000000000000000705711317335640100231260ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
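# Overview (informal summary of what follows): this module implements the
# client side of federation. FederationClient issues queries (make_query,
# query_client_keys, query_user_devices), fetches events we are missing
# (get_pdu, backfill, get_missing_events) and drives joins, leaves and
# invites on remote homeservers via the transport layer.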
from twisted.internet import defer from .federation_base import FederationBase from synapse.api.constants import Membership from synapse.api.errors import ( CodeMessageException, HttpResponseException, SynapseError, ) from synapse.util import unwrapFirstError, logcontext from synapse.util.caches.expiringcache import ExpiringCache from synapse.util.logutils import log_function from synapse.util.logcontext import preserve_fn, preserve_context_over_deferred from synapse.events import FrozenEvent, builder import synapse.metrics from synapse.util.retryutils import NotRetryingDestination import copy import itertools import logging import random logger = logging.getLogger(__name__) # synapse.federation.federation_client is a silly name metrics = synapse.metrics.get_metrics_for("synapse.federation.client") sent_queries_counter = metrics.register_counter("sent_queries", labels=["type"]) PDU_RETRY_TIME_MS = 1 * 60 * 1000 class FederationClient(FederationBase): def __init__(self, hs): super(FederationClient, self).__init__(hs) self.pdu_destination_tried = {} self._clock.looping_call( self._clear_tried_cache, 60 * 1000, ) self.state = hs.get_state_handler() def _clear_tried_cache(self): """Clear pdu_destination_tried cache""" now = self._clock.time_msec() old_dict = self.pdu_destination_tried self.pdu_destination_tried = {} for event_id, destination_dict in old_dict.items(): destination_dict = { dest: time for dest, time in destination_dict.items() if time + PDU_RETRY_TIME_MS > now } if destination_dict: self.pdu_destination_tried[event_id] = destination_dict def start_get_pdu_cache(self): self._get_pdu_cache = ExpiringCache( cache_name="get_pdu_cache", clock=self._clock, max_len=1000, expiry_ms=120 * 1000, reset_expiry_on_get=False, ) self._get_pdu_cache.start() @log_function def make_query(self, destination, query_type, args, retry_on_dns_fail=False, ignore_backoff=False): """Sends a federation Query to a remote homeserver of the given type and arguments. Args: destination (str): Domain name of the remote homeserver query_type (str): Category of the query type; should match the handler name used in register_query_handler(). args (dict): Mapping of strings to strings containing the details of the query request. ignore_backoff (bool): true to ignore the historical backoff data and try the request anyway. Returns: a Deferred which will eventually yield a JSON object from the response """ sent_queries_counter.inc(query_type) return self.transport_layer.make_query( destination, query_type, args, retry_on_dns_fail=retry_on_dns_fail, ignore_backoff=ignore_backoff, ) @log_function def query_client_keys(self, destination, content, timeout): """Query device keys for a device hosted on a remote server. Args: destination (str): Domain name of the remote homeserver content (dict): The query content. Returns: a Deferred which will eventually yield a JSON object from the response """ sent_queries_counter.inc("client_device_keys") return self.transport_layer.query_client_keys( destination, content, timeout ) @log_function def query_user_devices(self, destination, user_id, timeout=30000): """Query the device keys for a list of user ids hosted on a remote server. """ sent_queries_counter.inc("user_devices") return self.transport_layer.query_user_devices( destination, user_id, timeout ) @log_function def claim_client_keys(self, destination, content, timeout): """Claims one-time keys for a device hosted on a remote server. Args: destination (str): Domain name of the remote homeserver content (dict): The query content. 
Returns: a Deferred which will eventually yield a JSON object from the response """ sent_queries_counter.inc("client_one_time_keys") return self.transport_layer.claim_client_keys( destination, content, timeout ) @defer.inlineCallbacks @log_function def backfill(self, dest, context, limit, extremities): """Requests some more historic PDUs for the given context from the given destination server. Args: dest (str): The remote home server to ask. context (str): The context to backfill. limit (int): The maximum number of PDUs to return. extremities (list): List of PDU id and origins of the first pdus we have seen from the context Returns: Deferred: Results in the received PDUs. """ logger.debug("backfill extrem=%s", extremities) # If there are no extremeties then we've (probably) reached the start. if not extremities: return transaction_data = yield self.transport_layer.backfill( dest, context, extremities, limit) logger.debug("backfill transaction_data=%s", repr(transaction_data)) pdus = [ self.event_from_pdu_json(p, outlier=False) for p in transaction_data["pdus"] ] # FIXME: We should handle signature failures more gracefully. pdus[:] = yield logcontext.make_deferred_yieldable(defer.gatherResults( self._check_sigs_and_hashes(pdus), consumeErrors=True, ).addErrback(unwrapFirstError)) defer.returnValue(pdus) @defer.inlineCallbacks @log_function def get_pdu(self, destinations, event_id, outlier=False, timeout=None): """Requests the PDU with given origin and ID from the remote home servers. Will attempt to get the PDU from each destination in the list until one succeeds. This will persist the PDU locally upon receipt. Args: destinations (list): Which home servers to query event_id (str): event to fetch outlier (bool): Indicates whether the PDU is an `outlier`, i.e. if it's from an arbitary point in the context as opposed to part of the current block of PDUs. Defaults to `False` timeout (int): How long to try (in ms) each destination for before moving to the next destination. None indicates no timeout. Returns: Deferred: Results in the requested PDU. """ # TODO: Rate limit the number of times we try and get the same event. if self._get_pdu_cache: ev = self._get_pdu_cache.get(event_id) if ev: defer.returnValue(ev) pdu_attempts = self.pdu_destination_tried.setdefault(event_id, {}) signed_pdu = None for destination in destinations: now = self._clock.time_msec() last_attempt = pdu_attempts.get(destination, 0) if last_attempt + PDU_RETRY_TIME_MS > now: continue try: transaction_data = yield self.transport_layer.get_event( destination, event_id, timeout=timeout, ) logger.debug("transaction_data %r", transaction_data) pdu_list = [ self.event_from_pdu_json(p, outlier=outlier) for p in transaction_data["pdus"] ] if pdu_list and pdu_list[0]: pdu = pdu_list[0] # Check signatures are correct. signed_pdu = yield self._check_sigs_and_hash(pdu) break pdu_attempts[destination] = now except SynapseError as e: logger.info( "Failed to get PDU %s from %s because %s", event_id, destination, e, ) except NotRetryingDestination as e: logger.info(e.message) continue except Exception as e: pdu_attempts[destination] = now logger.info( "Failed to get PDU %s from %s because %s", event_id, destination, e, ) continue if self._get_pdu_cache is not None and signed_pdu: self._get_pdu_cache[event_id] = signed_pdu defer.returnValue(signed_pdu) @defer.inlineCallbacks @log_function def get_state_for_room(self, destination, room_id, event_id): """Requests all of the `current` state PDUs for a given room from a remote home server. 
Args: destination (str): The remote homeserver to query for the state. room_id (str): The id of the room we're interested in. event_id (str): The id of the event we want the state at. Returns: Deferred: Results in a list of PDUs. """ try: # First we try and ask for just the IDs, as thats far quicker if # we have most of the state and auth_chain already. # However, this may 404 if the other side has an old synapse. result = yield self.transport_layer.get_room_state_ids( destination, room_id, event_id=event_id, ) state_event_ids = result["pdu_ids"] auth_event_ids = result.get("auth_chain_ids", []) fetched_events, failed_to_fetch = yield self.get_events( [destination], room_id, set(state_event_ids + auth_event_ids) ) if failed_to_fetch: logger.warn("Failed to get %r", failed_to_fetch) event_map = { ev.event_id: ev for ev in fetched_events } pdus = [event_map[e_id] for e_id in state_event_ids if e_id in event_map] auth_chain = [ event_map[e_id] for e_id in auth_event_ids if e_id in event_map ] auth_chain.sort(key=lambda e: e.depth) defer.returnValue((pdus, auth_chain)) except HttpResponseException as e: if e.code == 400 or e.code == 404: logger.info("Failed to use get_room_state_ids API, falling back") else: raise e result = yield self.transport_layer.get_room_state( destination, room_id, event_id=event_id, ) pdus = [ self.event_from_pdu_json(p, outlier=True) for p in result["pdus"] ] auth_chain = [ self.event_from_pdu_json(p, outlier=True) for p in result.get("auth_chain", []) ] seen_events = yield self.store.get_events([ ev.event_id for ev in itertools.chain(pdus, auth_chain) ]) signed_pdus = yield self._check_sigs_and_hash_and_fetch( destination, [p for p in pdus if p.event_id not in seen_events], outlier=True ) signed_pdus.extend( seen_events[p.event_id] for p in pdus if p.event_id in seen_events ) signed_auth = yield self._check_sigs_and_hash_and_fetch( destination, [p for p in auth_chain if p.event_id not in seen_events], outlier=True ) signed_auth.extend( seen_events[p.event_id] for p in auth_chain if p.event_id in seen_events ) signed_auth.sort(key=lambda e: e.depth) defer.returnValue((signed_pdus, signed_auth)) @defer.inlineCallbacks def get_events(self, destinations, room_id, event_ids, return_local=True): """Fetch events from some remote destinations, checking if we already have them. Args: destinations (list) room_id (str) event_ids (list) return_local (bool): Whether to include events we already have in the DB in the returned list of events Returns: Deferred: A deferred resolving to a 2-tuple where the first is a list of events and the second is a list of event ids that we failed to fetch. 
""" if return_local: seen_events = yield self.store.get_events(event_ids, allow_rejected=True) signed_events = seen_events.values() else: seen_events = yield self.store.have_events(event_ids) signed_events = [] failed_to_fetch = set() missing_events = set(event_ids) for k in seen_events: missing_events.discard(k) if not missing_events: defer.returnValue((signed_events, failed_to_fetch)) def random_server_list(): srvs = list(destinations) random.shuffle(srvs) return srvs batch_size = 20 missing_events = list(missing_events) for i in xrange(0, len(missing_events), batch_size): batch = set(missing_events[i:i + batch_size]) deferreds = [ preserve_fn(self.get_pdu)( destinations=random_server_list(), event_id=e_id, ) for e_id in batch ] res = yield preserve_context_over_deferred( defer.DeferredList(deferreds, consumeErrors=True) ) for success, result in res: if success and result: signed_events.append(result) batch.discard(result.event_id) # We removed all events we successfully fetched from `batch` failed_to_fetch.update(batch) defer.returnValue((signed_events, failed_to_fetch)) @defer.inlineCallbacks @log_function def get_event_auth(self, destination, room_id, event_id): res = yield self.transport_layer.get_event_auth( destination, room_id, event_id, ) auth_chain = [ self.event_from_pdu_json(p, outlier=True) for p in res["auth_chain"] ] signed_auth = yield self._check_sigs_and_hash_and_fetch( destination, auth_chain, outlier=True ) signed_auth.sort(key=lambda e: e.depth) defer.returnValue(signed_auth) @defer.inlineCallbacks def make_membership_event(self, destinations, room_id, user_id, membership, content={},): """ Creates an m.room.member event, with context, without participating in the room. Does so by asking one of the already participating servers to create an event with proper context. Note that this does not append any events to any graphs. Args: destinations (str): Candidate homeservers which are probably participating in the room. room_id (str): The room in which the event will happen. user_id (str): The user whose membership is being evented. membership (str): The "membership" property of the event. Must be one of "join" or "leave". content (object): Any additional data to put into the content field of the event. Return: Deferred: resolves to a tuple of (origin (str), event (object)) where origin is the remote homeserver which generated the event. Fails with a ``CodeMessageException`` if the chosen remote server returns a 300/400 code. Fails with a ``RuntimeError`` if no servers were reachable. """ valid_memberships = {Membership.JOIN, Membership.LEAVE} if membership not in valid_memberships: raise RuntimeError( "make_membership_event called with membership='%s', must be one of %s" % (membership, ",".join(valid_memberships)) ) for destination in destinations: if destination == self.server_name: continue try: ret = yield self.transport_layer.make_membership_event( destination, room_id, user_id, membership ) pdu_dict = ret["event"] logger.debug("Got response to make_%s: %s", membership, pdu_dict) pdu_dict["content"].update(content) # The protoevent received over the JSON wire may not have all # the required fields. 
Lets just gloss over that because # there's some we never care about if "prev_state" not in pdu_dict: pdu_dict["prev_state"] = [] ev = builder.EventBuilder(pdu_dict) defer.returnValue( (destination, ev) ) break except CodeMessageException as e: if not 500 <= e.code < 600: raise else: logger.warn( "Failed to make_%s via %s: %s", membership, destination, e.message ) except Exception as e: logger.warn( "Failed to make_%s via %s: %s", membership, destination, e.message ) raise RuntimeError("Failed to send to any server.") @defer.inlineCallbacks def send_join(self, destinations, pdu): """Sends a join event to one of a list of homeservers. Doing so will cause the remote server to add the event to the graph, and send the event out to the rest of the federation. Args: destinations (str): Candidate homeservers which are probably participating in the room. pdu (BaseEvent): event to be sent Return: Deferred: resolves to a dict with members ``origin`` (a string giving the serer the event was sent to, ``state`` (?) and ``auth_chain``. Fails with a ``CodeMessageException`` if the chosen remote server returns a 300/400 code. Fails with a ``RuntimeError`` if no servers were reachable. """ for destination in destinations: if destination == self.server_name: continue try: time_now = self._clock.time_msec() _, content = yield self.transport_layer.send_join( destination=destination, room_id=pdu.room_id, event_id=pdu.event_id, content=pdu.get_pdu_json(time_now), ) logger.debug("Got content: %s", content) state = [ self.event_from_pdu_json(p, outlier=True) for p in content.get("state", []) ] auth_chain = [ self.event_from_pdu_json(p, outlier=True) for p in content.get("auth_chain", []) ] pdus = { p.event_id: p for p in itertools.chain(state, auth_chain) } valid_pdus = yield self._check_sigs_and_hash_and_fetch( destination, pdus.values(), outlier=True, ) valid_pdus_map = { p.event_id: p for p in valid_pdus } # NB: We *need* to copy to ensure that we don't have multiple # references being passed on, as that causes... issues. signed_state = [ copy.copy(valid_pdus_map[p.event_id]) for p in state if p.event_id in valid_pdus_map ] signed_auth = [ valid_pdus_map[p.event_id] for p in auth_chain if p.event_id in valid_pdus_map ] # NB: We *need* to copy to ensure that we don't have multiple # references being passed on, as that causes... issues. for s in signed_state: s.internal_metadata = copy.deepcopy(s.internal_metadata) auth_chain.sort(key=lambda e: e.depth) defer.returnValue({ "state": signed_state, "auth_chain": signed_auth, "origin": destination, }) except CodeMessageException as e: if not 500 <= e.code < 600: raise else: logger.exception( "Failed to send_join via %s: %s", destination, e.message ) except Exception as e: logger.exception( "Failed to send_join via %s: %s", destination, e.message ) raise RuntimeError("Failed to send to any server.") @defer.inlineCallbacks def send_invite(self, destination, room_id, event_id, pdu): time_now = self._clock.time_msec() code, content = yield self.transport_layer.send_invite( destination=destination, room_id=room_id, event_id=event_id, content=pdu.get_pdu_json(time_now), ) pdu_dict = content["event"] logger.debug("Got response to send_invite: %s", pdu_dict) pdu = self.event_from_pdu_json(pdu_dict) # Check signatures are correct. pdu = yield self._check_sigs_and_hash(pdu) # FIXME: We should handle signature failures more gracefully. defer.returnValue(pdu) @defer.inlineCallbacks def send_leave(self, destinations, pdu): """Sends a leave event to one of a list of homeservers. 
Doing so will cause the remote server to add the event to the graph, and send the event out to the rest of the federation. This is mostly useful to reject received invites. Args: destinations (str): Candidate homeservers which are probably participating in the room. pdu (BaseEvent): event to be sent Return: Deferred: resolves to None. Fails with a ``CodeMessageException`` if the chosen remote server returns a non-200 code. Fails with a ``RuntimeError`` if no servers were reachable. """ for destination in destinations: if destination == self.server_name: continue try: time_now = self._clock.time_msec() _, content = yield self.transport_layer.send_leave( destination=destination, room_id=pdu.room_id, event_id=pdu.event_id, content=pdu.get_pdu_json(time_now), ) logger.debug("Got content: %s", content) defer.returnValue(None) except CodeMessageException: raise except Exception as e: logger.exception( "Failed to send_leave via %s: %s", destination, e.message ) raise RuntimeError("Failed to send to any server.") def get_public_rooms(self, destination, limit=None, since_token=None, search_filter=None, include_all_networks=False, third_party_instance_id=None): if destination == self.server_name: return return self.transport_layer.get_public_rooms( destination, limit, since_token, search_filter, include_all_networks=include_all_networks, third_party_instance_id=third_party_instance_id, ) @defer.inlineCallbacks def query_auth(self, destination, room_id, event_id, local_auth): """ Params: destination (str) event_it (str) local_auth (list) """ time_now = self._clock.time_msec() send_content = { "auth_chain": [e.get_pdu_json(time_now) for e in local_auth], } code, content = yield self.transport_layer.send_query_auth( destination=destination, room_id=room_id, event_id=event_id, content=send_content, ) auth_chain = [ self.event_from_pdu_json(e) for e in content["auth_chain"] ] signed_auth = yield self._check_sigs_and_hash_and_fetch( destination, auth_chain, outlier=True ) signed_auth.sort(key=lambda e: e.depth) ret = { "auth_chain": signed_auth, "rejects": content.get("rejects", []), "missing": content.get("missing", []), } defer.returnValue(ret) @defer.inlineCallbacks def get_missing_events(self, destination, room_id, earliest_events_ids, latest_events, limit, min_depth, timeout): """Tries to fetch events we are missing. This is called when we receive an event without having received all of its ancestors. Args: destination (str) room_id (str) earliest_events_ids (list): List of event ids. Effectively the events we expected to receive, but haven't. `get_missing_events` should only return events that didn't happen before these. latest_events (list): List of events we have received that we don't have all previous events for. limit (int): Maximum number of events to return. min_depth (int): Minimum depth of events tor return. 
timeout (int): Max time to wait in ms """ try: content = yield self.transport_layer.get_missing_events( destination=destination, room_id=room_id, earliest_events=earliest_events_ids, latest_events=[e.event_id for e in latest_events], limit=limit, min_depth=min_depth, timeout=timeout, ) events = [ self.event_from_pdu_json(e) for e in content.get("events", []) ] signed_events = yield self._check_sigs_and_hash_and_fetch( destination, events, outlier=False ) except HttpResponseException as e: if not e.code == 400: raise # We are probably hitting an old server that doesn't support # get_missing_events signed_events = [] defer.returnValue(signed_events) def event_from_pdu_json(self, pdu_json, outlier=False): event = FrozenEvent( pdu_json ) event.internal_metadata.outlier = outlier return event @defer.inlineCallbacks def forward_third_party_invite(self, destinations, room_id, event_dict): for destination in destinations: if destination == self.server_name: continue try: yield self.transport_layer.exchange_third_party_invite( destination=destination, room_id=room_id, event_dict=event_dict, ) defer.returnValue(None) except CodeMessageException: raise except Exception as e: logger.exception( "Failed to send_third_party_invite via %s: %s", destination, e.message ) raise RuntimeError("Failed to send to any server.") synapse-0.24.0/synapse/federation/federation_server.py000066400000000000000000000531511317335640100231510ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from twisted.internet import defer from .federation_base import FederationBase from .units import Transaction, Edu from synapse.util import async from synapse.util.logutils import log_function from synapse.util.caches.response_cache import ResponseCache from synapse.events import FrozenEvent from synapse.types import get_domain_from_id import synapse.metrics from synapse.api.errors import AuthError, FederationError, SynapseError from synapse.crypto.event_signing import compute_event_signature import simplejson as json import logging # when processing incoming transactions, we try to handle multiple rooms in # parallel, up to this limit. TRANSACTION_CONCURRENCY_LIMIT = 10 logger = logging.getLogger(__name__) # synapse.federation.federation_server is a silly name metrics = synapse.metrics.get_metrics_for("synapse.federation.server") received_pdus_counter = metrics.register_counter("received_pdus") received_edus_counter = metrics.register_counter("received_edus") received_queries_counter = metrics.register_counter("received_queries", labels=["type"]) class FederationServer(FederationBase): def __init__(self, hs): super(FederationServer, self).__init__(hs) self.auth = hs.get_auth() self._server_linearizer = async.Linearizer("fed_server") self._transaction_linearizer = async.Linearizer("fed_txn_handler") # We cache responses to state queries, as they take a while and often # come in waves. 
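        # (Descriptive note on the visible usage: the cache is keyed on
        # (room_id, event_id) and stores the Deferred for the computed state
        # response; on_context_state_request below reuses an in-flight
        # computation when one exists instead of redoing the work for each
        # concurrent request.)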
self._state_resp_cache = ResponseCache(hs, timeout_ms=30000) def set_handler(self, handler): """Sets the handler that the replication layer will use to communicate receipt of new PDUs from other home servers. The required methods are documented on :py:class:`.ReplicationHandler`. """ self.handler = handler def register_edu_handler(self, edu_type, handler): if edu_type in self.edu_handlers: raise KeyError("Already have an EDU handler for %s" % (edu_type,)) self.edu_handlers[edu_type] = handler def register_query_handler(self, query_type, handler): """Sets the handler callable that will be used to handle an incoming federation Query of the given type. Args: query_type (str): Category name of the query, which should match the string used by make_query. handler (callable): Invoked to handle incoming queries of this type handler is invoked as: result = handler(args) where 'args' is a dict mapping strings to strings of the query arguments. It should return a Deferred that will eventually yield an object to encode as JSON. """ if query_type in self.query_handlers: raise KeyError( "Already have a Query handler for %s" % (query_type,) ) self.query_handlers[query_type] = handler @defer.inlineCallbacks @log_function def on_backfill_request(self, origin, room_id, versions, limit): with (yield self._server_linearizer.queue((origin, room_id))): pdus = yield self.handler.on_backfill_request( origin, room_id, versions, limit ) res = self._transaction_from_pdus(pdus).get_dict() defer.returnValue((200, res)) @defer.inlineCallbacks @log_function def on_incoming_transaction(self, transaction_data): # keep this as early as possible to make the calculated origin ts as # accurate as possible. request_time = self._clock.time_msec() transaction = Transaction(**transaction_data) if not transaction.transaction_id: raise Exception("Transaction missing transaction_id") if not transaction.origin: raise Exception("Transaction missing origin") logger.debug("[%s] Got transaction", transaction.transaction_id) # use a linearizer to ensure that we don't process the same transaction # multiple times in parallel. with (yield self._transaction_linearizer.queue( (transaction.origin, transaction.transaction_id), )): result = yield self._handle_incoming_transaction( transaction, request_time, ) defer.returnValue(result) @defer.inlineCallbacks def _handle_incoming_transaction(self, transaction, request_time): """ Process an incoming transaction and return the HTTP response Args: transaction (Transaction): incoming transaction request_time (int): timestamp that the HTTP request arrived at Returns: Deferred[(int, object)]: http response code and body """ response = yield self.transaction_actions.have_responded(transaction) if response: logger.debug( "[%s] We've already responded to this request", transaction.transaction_id ) defer.returnValue(response) return logger.debug("[%s] Transaction is new", transaction.transaction_id) received_pdus_counter.inc_by(len(transaction.pdus)) pdus_by_room = {} for p in transaction.pdus: if "unsigned" in p: unsigned = p["unsigned"] if "age" in unsigned: p["age"] = unsigned["age"] if "age" in p: p["age_ts"] = request_time - int(p["age"]) del p["age"] event = self.event_from_pdu_json(p) room_id = event.room_id pdus_by_room.setdefault(room_id, []).append(event) pdu_results = {} # we can process different rooms in parallel (which is useful if they # require callouts to other servers to fetch missing events), but # impose a limit to avoid going too crazy with ram/cpu. 
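        # process_pdus_for_room (defined just below) handles one room's PDUs in
        # order; async.concurrently_execute then runs it for up to
        # TRANSACTION_CONCURRENCY_LIMIT rooms at a time.  Failures for
        # individual PDUs are recorded in pdu_results rather than aborting the
        # whole transaction.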
@defer.inlineCallbacks def process_pdus_for_room(room_id): logger.debug("Processing PDUs for %s", room_id) for pdu in pdus_by_room[room_id]: event_id = pdu.event_id try: yield self._handle_received_pdu( transaction.origin, pdu ) pdu_results[event_id] = {} except FederationError as e: logger.warn("Error handling PDU %s: %s", event_id, e) pdu_results[event_id] = {"error": str(e)} except Exception as e: pdu_results[event_id] = {"error": str(e)} logger.exception("Failed to handle PDU %s", event_id) yield async.concurrently_execute( process_pdus_for_room, pdus_by_room.keys(), TRANSACTION_CONCURRENCY_LIMIT, ) if hasattr(transaction, "edus"): for edu in (Edu(**x) for x in transaction.edus): yield self.received_edu( transaction.origin, edu.edu_type, edu.content ) pdu_failures = getattr(transaction, "pdu_failures", []) for failure in pdu_failures: logger.info("Got failure %r", failure) response = { "pdus": pdu_results, } logger.debug("Returning: %s", str(response)) yield self.transaction_actions.set_response( transaction, 200, response ) defer.returnValue((200, response)) @defer.inlineCallbacks def received_edu(self, origin, edu_type, content): received_edus_counter.inc() if edu_type in self.edu_handlers: try: yield self.edu_handlers[edu_type](origin, content) except SynapseError as e: logger.info("Failed to handle edu %r: %r", edu_type, e) except Exception as e: logger.exception("Failed to handle edu %r", edu_type) else: logger.warn("Received EDU of type %s with no handler", edu_type) @defer.inlineCallbacks @log_function def on_context_state_request(self, origin, room_id, event_id): if not event_id: raise NotImplementedError("Specify an event") in_room = yield self.auth.check_host_in_room(room_id, origin) if not in_room: raise AuthError(403, "Host not in room.") result = self._state_resp_cache.get((room_id, event_id)) if not result: with (yield self._server_linearizer.queue((origin, room_id))): resp = yield self._state_resp_cache.set( (room_id, event_id), self._on_context_state_request_compute(room_id, event_id) ) else: resp = yield result defer.returnValue((200, resp)) @defer.inlineCallbacks def on_state_ids_request(self, origin, room_id, event_id): if not event_id: raise NotImplementedError("Specify an event") in_room = yield self.auth.check_host_in_room(room_id, origin) if not in_room: raise AuthError(403, "Host not in room.") state_ids = yield self.handler.get_state_ids_for_pdu( room_id, event_id, ) auth_chain_ids = yield self.store.get_auth_chain_ids(state_ids) defer.returnValue((200, { "pdu_ids": state_ids, "auth_chain_ids": auth_chain_ids, })) @defer.inlineCallbacks def _on_context_state_request_compute(self, room_id, event_id): pdus = yield self.handler.get_state_for_pdu( room_id, event_id, ) auth_chain = yield self.store.get_auth_chain( [pdu.event_id for pdu in pdus] ) for event in auth_chain: # We sign these again because there was a bug where we # incorrectly signed things the first time round if self.hs.is_mine_id(event.event_id): event.signatures.update( compute_event_signature( event, self.hs.hostname, self.hs.config.signing_key[0] ) ) defer.returnValue({ "pdus": [pdu.get_pdu_json() for pdu in pdus], "auth_chain": [pdu.get_pdu_json() for pdu in auth_chain], }) @defer.inlineCallbacks @log_function def on_pdu_request(self, origin, event_id): pdu = yield self._get_persisted_pdu(origin, event_id) if pdu: defer.returnValue( (200, self._transaction_from_pdus([pdu]).get_dict()) ) else: defer.returnValue((404, "")) @defer.inlineCallbacks @log_function def on_pull_request(self, origin, 
versions): raise NotImplementedError("Pull transactions not implemented") @defer.inlineCallbacks def on_query_request(self, query_type, args): received_queries_counter.inc(query_type) if query_type in self.query_handlers: response = yield self.query_handlers[query_type](args) defer.returnValue((200, response)) else: defer.returnValue( (404, "No handler for Query type '%s'" % (query_type,)) ) @defer.inlineCallbacks def on_make_join_request(self, room_id, user_id): pdu = yield self.handler.on_make_join_request(room_id, user_id) time_now = self._clock.time_msec() defer.returnValue({"event": pdu.get_pdu_json(time_now)}) @defer.inlineCallbacks def on_invite_request(self, origin, content): pdu = self.event_from_pdu_json(content) ret_pdu = yield self.handler.on_invite_request(origin, pdu) time_now = self._clock.time_msec() defer.returnValue((200, {"event": ret_pdu.get_pdu_json(time_now)})) @defer.inlineCallbacks def on_send_join_request(self, origin, content): logger.debug("on_send_join_request: content: %s", content) pdu = self.event_from_pdu_json(content) logger.debug("on_send_join_request: pdu sigs: %s", pdu.signatures) res_pdus = yield self.handler.on_send_join_request(origin, pdu) time_now = self._clock.time_msec() defer.returnValue((200, { "state": [p.get_pdu_json(time_now) for p in res_pdus["state"]], "auth_chain": [ p.get_pdu_json(time_now) for p in res_pdus["auth_chain"] ], })) @defer.inlineCallbacks def on_make_leave_request(self, room_id, user_id): pdu = yield self.handler.on_make_leave_request(room_id, user_id) time_now = self._clock.time_msec() defer.returnValue({"event": pdu.get_pdu_json(time_now)}) @defer.inlineCallbacks def on_send_leave_request(self, origin, content): logger.debug("on_send_leave_request: content: %s", content) pdu = self.event_from_pdu_json(content) logger.debug("on_send_leave_request: pdu sigs: %s", pdu.signatures) yield self.handler.on_send_leave_request(origin, pdu) defer.returnValue((200, {})) @defer.inlineCallbacks def on_event_auth(self, origin, room_id, event_id): with (yield self._server_linearizer.queue((origin, room_id))): time_now = self._clock.time_msec() auth_pdus = yield self.handler.on_event_auth(event_id) res = { "auth_chain": [a.get_pdu_json(time_now) for a in auth_pdus], } defer.returnValue((200, res)) @defer.inlineCallbacks def on_query_auth_request(self, origin, content, room_id, event_id): """ Content is a dict with keys:: auth_chain (list): A list of events that give the auth chain. missing (list): A list of event_ids indicating what the other side (`origin`) think we're missing. rejects (dict): A mapping from event_id to a 2-tuple of reason string and a proof (or None) of why the event was rejected. The keys of this dict give the list of events the `origin` has rejected. 
Args: origin (str) content (dict) event_id (str) Returns: Deferred: Results in `dict` with the same format as `content` """ with (yield self._server_linearizer.queue((origin, room_id))): auth_chain = [ self.event_from_pdu_json(e) for e in content["auth_chain"] ] signed_auth = yield self._check_sigs_and_hash_and_fetch( origin, auth_chain, outlier=True ) ret = yield self.handler.on_query_auth( origin, event_id, signed_auth, content.get("rejects", []), content.get("missing", []), ) time_now = self._clock.time_msec() send_content = { "auth_chain": [ e.get_pdu_json(time_now) for e in ret["auth_chain"] ], "rejects": ret.get("rejects", []), "missing": ret.get("missing", []), } defer.returnValue( (200, send_content) ) @log_function def on_query_client_keys(self, origin, content): return self.on_query_request("client_keys", content) def on_query_user_devices(self, origin, user_id): return self.on_query_request("user_devices", user_id) @defer.inlineCallbacks @log_function def on_claim_client_keys(self, origin, content): query = [] for user_id, device_keys in content.get("one_time_keys", {}).items(): for device_id, algorithm in device_keys.items(): query.append((user_id, device_id, algorithm)) results = yield self.store.claim_e2e_one_time_keys(query) json_result = {} for user_id, device_keys in results.items(): for device_id, keys in device_keys.items(): for key_id, json_bytes in keys.items(): json_result.setdefault(user_id, {})[device_id] = { key_id: json.loads(json_bytes) } logger.info( "Claimed one-time-keys: %s", ",".join(( "%s for %s:%s" % (key_id, user_id, device_id) for user_id, user_keys in json_result.iteritems() for device_id, device_keys in user_keys.iteritems() for key_id, _ in device_keys.iteritems() )), ) defer.returnValue({"one_time_keys": json_result}) @defer.inlineCallbacks @log_function def on_get_missing_events(self, origin, room_id, earliest_events, latest_events, limit, min_depth): with (yield self._server_linearizer.queue((origin, room_id))): logger.info( "on_get_missing_events: earliest_events: %r, latest_events: %r," " limit: %d, min_depth: %d", earliest_events, latest_events, limit, min_depth ) missing_events = yield self.handler.on_get_missing_events( origin, room_id, earliest_events, latest_events, limit, min_depth ) if len(missing_events) < 5: logger.info( "Returning %d events: %r", len(missing_events), missing_events ) else: logger.info("Returning %d events", len(missing_events)) time_now = self._clock.time_msec() defer.returnValue({ "events": [ev.get_pdu_json(time_now) for ev in missing_events], }) @log_function def on_openid_userinfo(self, token): ts_now_ms = self._clock.time_msec() return self.store.get_user_id_for_open_id_token(token, ts_now_ms) @log_function def _get_persisted_pdu(self, origin, event_id, do_auth=True): """ Get a PDU from the database with given origin and id. Returns: Deferred: Results in a `Pdu`. """ return self.handler.get_persisted_pdu( origin, event_id, do_auth=do_auth ) def _transaction_from_pdus(self, pdu_list): """Returns a new Transaction containing the given PDUs suitable for transmission. """ time_now = self._clock.time_msec() pdus = [p.get_pdu_json(time_now) for p in pdu_list] return Transaction( origin=self.server_name, pdus=pdus, origin_server_ts=int(time_now), destination=None, ) @defer.inlineCallbacks def _handle_received_pdu(self, origin, pdu): """ Process a PDU received in a federation /send/ transaction. 
Args: origin (str): server which sent the pdu pdu (FrozenEvent): received pdu Returns (Deferred): completes with None Raises: FederationError if the signatures / hash do not match """ # check that it's actually being sent from a valid destination to # workaround bug #1753 in 0.18.5 and 0.18.6 if origin != get_domain_from_id(pdu.event_id): # We continue to accept join events from any server; this is # necessary for the federation join dance to work correctly. # (When we join over federation, the "helper" server is # responsible for sending out the join event, rather than the # origin. See bug #1893). if not ( pdu.type == 'm.room.member' and pdu.content and pdu.content.get("membership", None) == 'join' ): logger.info( "Discarding PDU %s from invalid origin %s", pdu.event_id, origin ) return else: logger.info( "Accepting join PDU %s from %s", pdu.event_id, origin ) # Check signature. try: pdu = yield self._check_sigs_and_hash(pdu) except SynapseError as e: raise FederationError( "ERROR", e.code, e.msg, affected=pdu.event_id, ) yield self.handler.on_receive_pdu(origin, pdu, get_missing=True) def __str__(self): return "" % self.server_name def event_from_pdu_json(self, pdu_json, outlier=False): event = FrozenEvent( pdu_json ) event.internal_metadata.outlier = outlier return event @defer.inlineCallbacks def exchange_third_party_invite( self, sender_user_id, target_user_id, room_id, signed, ): ret = yield self.handler.exchange_third_party_invite( sender_user_id, target_user_id, room_id, signed, ) defer.returnValue(ret) @defer.inlineCallbacks def on_exchange_third_party_invite_request(self, origin, room_id, event_dict): ret = yield self.handler.on_exchange_third_party_invite_request( origin, room_id, event_dict ) defer.returnValue(ret) synapse-0.24.0/synapse/federation/persistence.py000066400000000000000000000062141317335640100217650ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ This module contains all the persistence actions done by the federation package. These actions are mostly only used by the :py:mod:`.replication` module. """ from twisted.internet import defer from synapse.util.logutils import log_function import logging logger = logging.getLogger(__name__) class TransactionActions(object): """ Defines persistence actions that relate to handling Transactions. """ def __init__(self, datastore): self.store = datastore @log_function def have_responded(self, transaction): """ Have we already responded to a transaction with the same id and origin? Returns: Deferred: Results in `None` if we have not previously responded to this transaction or a 2-tuple of `(int, dict)` representing the response code and response body. 
""" if not transaction.transaction_id: raise RuntimeError("Cannot persist a transaction with no " "transaction_id") return self.store.get_received_txn_response( transaction.transaction_id, transaction.origin ) @log_function def set_response(self, transaction, code, response): """ Persist how we responded to a transaction. Returns: Deferred """ if not transaction.transaction_id: raise RuntimeError("Cannot persist a transaction with no " "transaction_id") return self.store.set_received_txn_response( transaction.transaction_id, transaction.origin, code, response, ) @defer.inlineCallbacks @log_function def prepare_to_send(self, transaction): """ Persists the `Transaction` we are about to send and works out the correct value for the `prev_ids` key. Returns: Deferred """ transaction.prev_ids = yield self.store.prep_send_transaction( transaction.transaction_id, transaction.destination, transaction.origin_server_ts, ) @log_function def delivered(self, transaction, response_code, response_dict): """ Marks the given `Transaction` as having been successfully delivered to the remote homeserver, and what the response was. Returns: Deferred """ return self.store.delivered_txn( transaction.transaction_id, transaction.destination, response_code, response_dict, ) synapse-0.24.0/synapse/federation/replication.py000066400000000000000000000043771317335640100217620ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """This layer is responsible for replicating with remote home servers using a given transport. """ from .federation_client import FederationClient from .federation_server import FederationServer from .persistence import TransactionActions import logging logger = logging.getLogger(__name__) class ReplicationLayer(FederationClient, FederationServer): """This layer is responsible for replicating with remote home servers over the given transport. I.e., does the sending and receiving of PDUs to remote home servers. The layer communicates with the rest of the server via a registered ReplicationHandler. In more detail, the layer: * Receives incoming data and processes it into transactions and pdus. * Fetches any PDUs it thinks it might have missed. * Keeps the current state for contexts up to date by applying the suitable conflict resolution. * Sends outgoing pdus wrapped in transactions. * Fills out the references to previous pdus/transactions appropriately for outgoing data. 
""" def __init__(self, hs, transport_layer): self.server_name = hs.hostname self.keyring = hs.get_keyring() self.transport_layer = transport_layer self.federation_client = self self.store = hs.get_datastore() self.handler = None self.edu_handlers = {} self.query_handlers = {} self._clock = hs.get_clock() self.transaction_actions = TransactionActions(self.store) self.hs = hs super(ReplicationLayer, self).__init__(hs) def __str__(self): return "" % self.server_name synapse-0.24.0/synapse/federation/send_queue.py000066400000000000000000000416321317335640100216010ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """A federation sender that forwards things to be sent across replication to a worker process. It assumes there is a single worker process feeding off of it. Each row in the replication stream consists of a type and some json, where the types indicate whether they are presence, or edus, etc. Ephemeral or non-event data are queued up in-memory. When the worker requests updates since a particular point, all in-memory data since before that point is dropped. We also expire things in the queue after 5 minutes, to ensure that a dead worker doesn't cause the queues to grow limitlessly. Events are replicated via a separate events stream. """ from .units import Edu from synapse.storage.presence import UserPresenceState from synapse.util.metrics import Measure import synapse.metrics from blist import sorteddict from collections import namedtuple import logging logger = logging.getLogger(__name__) metrics = synapse.metrics.get_metrics_for(__name__) class FederationRemoteSendQueue(object): """A drop in replacement for TransactionQueue""" def __init__(self, hs): self.server_name = hs.hostname self.clock = hs.get_clock() self.notifier = hs.get_notifier() self.is_mine_id = hs.is_mine_id self.presence_map = {} # Pending presence map user_id -> UserPresenceState self.presence_changed = sorteddict() # Stream position -> user_id self.keyed_edu = {} # (destination, key) -> EDU self.keyed_edu_changed = sorteddict() # stream position -> (destination, key) self.edus = sorteddict() # stream position -> Edu self.failures = sorteddict() # stream position -> (destination, Failure) self.device_messages = sorteddict() # stream position -> destination self.pos = 1 self.pos_time = sorteddict() # EVERYTHING IS SAD. In particular, python only makes new scopes when # we make a new function, so we need to make a new function so the inner # lambda binds to the queue rather than to the name of the queue which # changes. ARGH. 
def register(name, queue): metrics.register_callback( queue_name + "_size", lambda: len(queue), ) for queue_name in [ "presence_map", "presence_changed", "keyed_edu", "keyed_edu_changed", "edus", "failures", "device_messages", "pos_time", ]: register(queue_name, getattr(self, queue_name)) self.clock.looping_call(self._clear_queue, 30 * 1000) def _next_pos(self): pos = self.pos self.pos += 1 self.pos_time[self.clock.time_msec()] = pos return pos def _clear_queue(self): """Clear the queues for anything older than N minutes""" FIVE_MINUTES_AGO = 5 * 60 * 1000 now = self.clock.time_msec() keys = self.pos_time.keys() time = keys.bisect_left(now - FIVE_MINUTES_AGO) if not keys[:time]: return position_to_delete = max(keys[:time]) for key in keys[:time]: del self.pos_time[key] self._clear_queue_before_pos(position_to_delete) def _clear_queue_before_pos(self, position_to_delete): """Clear all the queues from before a given position""" with Measure(self.clock, "send_queue._clear"): # Delete things out of presence maps keys = self.presence_changed.keys() i = keys.bisect_left(position_to_delete) for key in keys[:i]: del self.presence_changed[key] user_ids = set( user_id for uids in self.presence_changed.itervalues() for user_id in uids ) to_del = [ user_id for user_id in self.presence_map if user_id not in user_ids ] for user_id in to_del: del self.presence_map[user_id] # Delete things out of keyed edus keys = self.keyed_edu_changed.keys() i = keys.bisect_left(position_to_delete) for key in keys[:i]: del self.keyed_edu_changed[key] live_keys = set() for edu_key in self.keyed_edu_changed.values(): live_keys.add(edu_key) to_del = [edu_key for edu_key in self.keyed_edu if edu_key not in live_keys] for edu_key in to_del: del self.keyed_edu[edu_key] # Delete things out of edu map keys = self.edus.keys() i = keys.bisect_left(position_to_delete) for key in keys[:i]: del self.edus[key] # Delete things out of failure map keys = self.failures.keys() i = keys.bisect_left(position_to_delete) for key in keys[:i]: del self.failures[key] # Delete things out of device map keys = self.device_messages.keys() i = keys.bisect_left(position_to_delete) for key in keys[:i]: del self.device_messages[key] def notify_new_events(self, current_id): """As per TransactionQueue""" # We don't need to replicate this as it gets sent down a different # stream. pass def send_edu(self, destination, edu_type, content, key=None): """As per TransactionQueue""" pos = self._next_pos() edu = Edu( origin=self.server_name, destination=destination, edu_type=edu_type, content=content, ) if key: assert isinstance(key, tuple) self.keyed_edu[(destination, key)] = edu self.keyed_edu_changed[pos] = (destination, key) else: self.edus[pos] = edu self.notifier.on_new_replication_data() def send_presence(self, states): """As per TransactionQueue Args: states (list(UserPresenceState)) """ pos = self._next_pos() # We only want to send presence for our own users, so lets always just # filter here just in case. 
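        # The filtered states end up in two structures: presence_map keeps the
        # latest UserPresenceState per user, while presence_changed records
        # which user_ids changed at this stream position, which is what
        # get_replication_rows later replays to the worker.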
local_states = filter(lambda s: self.is_mine_id(s.user_id), states) self.presence_map.update({state.user_id: state for state in local_states}) self.presence_changed[pos] = [state.user_id for state in local_states] self.notifier.on_new_replication_data() def send_failure(self, failure, destination): """As per TransactionQueue""" pos = self._next_pos() self.failures[pos] = (destination, str(failure)) self.notifier.on_new_replication_data() def send_device_messages(self, destination): """As per TransactionQueue""" pos = self._next_pos() self.device_messages[pos] = destination self.notifier.on_new_replication_data() def get_current_token(self): return self.pos - 1 def federation_ack(self, token): self._clear_queue_before_pos(token) def get_replication_rows(self, from_token, to_token, limit, federation_ack=None): """Get rows to be sent over federation between the two tokens Args: from_token (int) to_token(int) limit (int) federation_ack (int): Optional. The position where the worker is explicitly acknowledged it has handled. Allows us to drop data from before that point """ # TODO: Handle limit. # To handle restarts where we wrap around if from_token > self.pos: from_token = -1 # list of tuple(int, BaseFederationRow), where the first is the position # of the federation stream. rows = [] # There should be only one reader, so lets delete everything its # acknowledged its seen. if federation_ack: self._clear_queue_before_pos(federation_ack) # Fetch changed presence keys = self.presence_changed.keys() i = keys.bisect_right(from_token) j = keys.bisect_right(to_token) + 1 dest_user_ids = [ (pos, user_id) for pos in keys[i:j] for user_id in self.presence_changed[pos] ] for (key, user_id) in dest_user_ids: rows.append((key, PresenceRow( state=self.presence_map[user_id], ))) # Fetch changes keyed edus keys = self.keyed_edu_changed.keys() i = keys.bisect_right(from_token) j = keys.bisect_right(to_token) + 1 # We purposefully clobber based on the key here, python dict comprehensions # always use the last value, so this will correctly point to the last # stream position. keyed_edus = {self.keyed_edu_changed[k]: k for k in keys[i:j]} for ((destination, edu_key), pos) in keyed_edus.iteritems(): rows.append((pos, KeyedEduRow( key=edu_key, edu=self.keyed_edu[(destination, edu_key)], ))) # Fetch changed edus keys = self.edus.keys() i = keys.bisect_right(from_token) j = keys.bisect_right(to_token) + 1 edus = ((k, self.edus[k]) for k in keys[i:j]) for (pos, edu) in edus: rows.append((pos, EduRow(edu))) # Fetch changed failures keys = self.failures.keys() i = keys.bisect_right(from_token) j = keys.bisect_right(to_token) + 1 failures = ((k, self.failures[k]) for k in keys[i:j]) for (pos, (destination, failure)) in failures: rows.append((pos, FailureRow( destination=destination, failure=failure, ))) # Fetch changed device messages keys = self.device_messages.keys() i = keys.bisect_right(from_token) j = keys.bisect_right(to_token) + 1 device_messages = {self.device_messages[k]: k for k in keys[i:j]} for (destination, pos) in device_messages.iteritems(): rows.append((pos, DeviceRow( destination=destination, ))) # Sort rows based on pos rows.sort() return [(pos, row.TypeId, row.to_data()) for pos, row in rows] class BaseFederationRow(object): """Base class for rows to be sent in the federation stream. Specifies how to identify, serialize and deserialize the different types. """ TypeId = None # Unique string that ids the type. Must be overriden in sub classes. 
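    # Each subclass below sets a one-character TypeId ("p", "k", "e", "f", "d");
    # the TypeToRow mapping at the bottom of this module uses it to find the
    # right row class when rows are parsed back off the replication stream.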
@staticmethod def from_data(data): """Parse the data from the federation stream into a row. Args: data: The value of ``data`` from FederationStreamRow.data, type depends on the type of stream """ raise NotImplementedError() def to_data(self): """Serialize this row to be sent over the federation stream. Returns: The value to be sent in FederationStreamRow.data. The type depends on the type of stream. """ raise NotImplementedError() def add_to_buffer(self, buff): """Add this row to the appropriate field in the buffer ready for this to be sent over federation. We use a buffer so that we can batch up events that have come in at the same time and send them all at once. Args: buff (BufferedToSend) """ raise NotImplementedError() class PresenceRow(BaseFederationRow, namedtuple("PresenceRow", ( "state", # UserPresenceState ))): TypeId = "p" @staticmethod def from_data(data): return PresenceRow( state=UserPresenceState.from_dict(data) ) def to_data(self): return self.state.as_dict() def add_to_buffer(self, buff): buff.presence.append(self.state) class KeyedEduRow(BaseFederationRow, namedtuple("KeyedEduRow", ( "key", # tuple(str) - the edu key passed to send_edu "edu", # Edu ))): """Streams EDUs that have an associated key that is ued to clobber. For example, typing EDUs clobber based on room_id. """ TypeId = "k" @staticmethod def from_data(data): return KeyedEduRow( key=tuple(data["key"]), edu=Edu(**data["edu"]), ) def to_data(self): return { "key": self.key, "edu": self.edu.get_internal_dict(), } def add_to_buffer(self, buff): buff.keyed_edus.setdefault( self.edu.destination, {} )[self.key] = self.edu class EduRow(BaseFederationRow, namedtuple("EduRow", ( "edu", # Edu ))): """Streams EDUs that don't have keys. See KeyedEduRow """ TypeId = "e" @staticmethod def from_data(data): return EduRow(Edu(**data)) def to_data(self): return self.edu.get_internal_dict() def add_to_buffer(self, buff): buff.edus.setdefault(self.edu.destination, []).append(self.edu) class FailureRow(BaseFederationRow, namedtuple("FailureRow", ( "destination", # str "failure", ))): """Streams failures to a remote server. Failures are issued when there was something wrong with a transaction the remote sent us, e.g. it included an event that was invalid. """ TypeId = "f" @staticmethod def from_data(data): return FailureRow( destination=data["destination"], failure=data["failure"], ) def to_data(self): return { "destination": self.destination, "failure": self.failure, } def add_to_buffer(self, buff): buff.failures.setdefault(self.destination, []).append(self.failure) class DeviceRow(BaseFederationRow, namedtuple("DeviceRow", ( "destination", # str ))): """Streams the fact that either a) there is pending to device messages for users on the remote, or b) a local users device has changed and needs to be sent to the remote. 
""" TypeId = "d" @staticmethod def from_data(data): return DeviceRow(destination=data["destination"]) def to_data(self): return {"destination": self.destination} def add_to_buffer(self, buff): buff.device_destinations.add(self.destination) TypeToRow = { Row.TypeId: Row for Row in ( PresenceRow, KeyedEduRow, EduRow, FailureRow, DeviceRow, ) } ParsedFederationStreamData = namedtuple("ParsedFederationStreamData", ( "presence", # list(UserPresenceState) "keyed_edus", # dict of destination -> { key -> Edu } "edus", # dict of destination -> [Edu] "failures", # dict of destination -> [failures] "device_destinations", # set of destinations )) def process_rows_for_federation(transaction_queue, rows): """Parse a list of rows from the federation stream and put them in the transaction queue ready for sending to the relevant homeservers. Args: transaction_queue (TransactionQueue) rows (list(synapse.replication.tcp.streams.FederationStreamRow)) """ # The federation stream contains a bunch of different types of # rows that need to be handled differently. We parse the rows, put # them into the appropriate collection and then send them off. buff = ParsedFederationStreamData( presence=[], keyed_edus={}, edus={}, failures={}, device_destinations=set(), ) # Parse the rows in the stream and add to the buffer for row in rows: if row.type not in TypeToRow: logger.error("Unrecognized federation row type %r", row.type) continue RowType = TypeToRow[row.type] parsed_row = RowType.from_data(row.data) parsed_row.add_to_buffer(buff) if buff.presence: transaction_queue.send_presence(buff.presence) for destination, edu_map in buff.keyed_edus.iteritems(): for key, edu in edu_map.items(): transaction_queue.send_edu( edu.destination, edu.edu_type, edu.content, key=key, ) for destination, edu_list in buff.edus.iteritems(): for edu in edu_list: transaction_queue.send_edu( edu.destination, edu.edu_type, edu.content, key=None, ) for destination, failure_list in buff.failures.iteritems(): for failure in failure_list: transaction_queue.send_failure(destination, failure) for destination in buff.device_destinations: transaction_queue.send_device_messages(destination) synapse-0.24.0/synapse/federation/transaction_queue.py000066400000000000000000000555001317335640100231740ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
import datetime from twisted.internet import defer from .persistence import TransactionActions from .units import Transaction, Edu from synapse.api.errors import HttpResponseException from synapse.util import logcontext from synapse.util.async import run_on_reactor from synapse.util.retryutils import NotRetryingDestination, get_retry_limiter from synapse.util.metrics import measure_func from synapse.handlers.presence import format_user_presence_state, get_interested_remotes import synapse.metrics import logging logger = logging.getLogger(__name__) metrics = synapse.metrics.get_metrics_for(__name__) client_metrics = synapse.metrics.get_metrics_for("synapse.federation.client") sent_pdus_destination_dist = client_metrics.register_distribution( "sent_pdu_destinations" ) sent_edus_counter = client_metrics.register_counter("sent_edus") sent_transactions_counter = client_metrics.register_counter("sent_transactions") class TransactionQueue(object): """This class makes sure we only have one transaction in flight at a time for a given destination. It batches pending PDUs into single transactions. """ def __init__(self, hs): self.server_name = hs.hostname self.store = hs.get_datastore() self.state = hs.get_state_handler() self.transaction_actions = TransactionActions(self.store) self.transport_layer = hs.get_federation_transport_client() self.clock = hs.get_clock() self.is_mine_id = hs.is_mine_id # Is a mapping from destinations -> deferreds. Used to keep track # of which destinations have transactions in flight and when they are # done self.pending_transactions = {} metrics.register_callback( "pending_destinations", lambda: len(self.pending_transactions), ) # Is a mapping from destination -> list of # tuple(pending pdus, deferred, order) self.pending_pdus_by_dest = pdus = {} # destination -> list of tuple(edu, deferred) self.pending_edus_by_dest = edus = {} # Map of user_id -> UserPresenceState for all the pending presence # to be sent out by user_id. Entries here get processed and put in # pending_presence_by_dest self.pending_presence = {} # Map of destination -> user_id -> UserPresenceState of pending presence # to be sent to each destinations self.pending_presence_by_dest = presence = {} # Pending EDUs by their "key". Keyed EDUs are EDUs that get clobbered # based on their key (e.g. typing events by room_id) # Map of destination -> (edu_type, key) -> Edu self.pending_edus_keyed_by_dest = edus_keyed = {} metrics.register_callback( "pending_pdus", lambda: sum(map(len, pdus.values())), ) metrics.register_callback( "pending_edus", lambda: ( sum(map(len, edus.values())) + sum(map(len, presence.values())) + sum(map(len, edus_keyed.values())) ), ) # destination -> list of tuple(failure, deferred) self.pending_failures_by_dest = {} # destination -> stream_id of last successfully sent to-device message. # NB: may be a long or an int. self.last_device_stream_id_by_dest = {} # destination -> stream_id of last successfully sent device list # update. self.last_device_list_stream_id_by_dest = {} # HACK to get unique tx id self._next_txn_id = int(self.clock.time_msec()) self._order = 1 self._is_processing = False self._last_poked_id = -1 self._processing_pending_presence = False def can_send_to(self, destination): """Can we send messages to the given server? We can't send messages to ourselves. If we are running on localhost then we can only federation with other servers running on localhost. Otherwise we only federate with servers on a public domain. 
Args: destination(str): The server we are possibly trying to send to. Returns: bool: True if we can send to the server. """ if destination == self.server_name: return False if self.server_name.startswith("localhost"): return destination.startswith("localhost") else: return not destination.startswith("localhost") @defer.inlineCallbacks def notify_new_events(self, current_id): """This gets called when we have some new events we might want to send out to other servers. """ self._last_poked_id = max(current_id, self._last_poked_id) if self._is_processing: return try: self._is_processing = True while True: last_token = yield self.store.get_federation_out_pos("events") next_token, events = yield self.store.get_all_new_events_stream( last_token, self._last_poked_id, limit=20, ) logger.debug("Handling %s -> %s", last_token, next_token) if not events and next_token >= self._last_poked_id: break for event in events: # Only send events for this server. send_on_behalf_of = event.internal_metadata.get_send_on_behalf_of() is_mine = self.is_mine_id(event.event_id) if not is_mine and send_on_behalf_of is None: continue # Get the state from before the event. # We need to make sure that this is the state from before # the event and not from after it. # Otherwise if the last member on a server in a room is # banned then it won't receive the event because it won't # be in the room after the ban. destinations = yield self.state.get_current_hosts_in_room( event.room_id, latest_event_ids=[ prev_id for prev_id, _ in event.prev_events ], ) destinations = set(destinations) if send_on_behalf_of is not None: # If we are sending the event on behalf of another server # then it already has the event and there is no reason to # send the event to it. destinations.discard(send_on_behalf_of) logger.debug("Sending %s to %r", event, destinations) self._send_pdu(event, destinations) yield self.store.update_federation_out_pos( "events", next_token ) finally: self._is_processing = False def _send_pdu(self, pdu, destinations): # We loop through all destinations to see whether we already have # a transaction in progress. If we do, stick it in the pending_pdus # table and we'll get back to it later. order = self._order self._order += 1 destinations = set(destinations) destinations = set( dest for dest in destinations if self.can_send_to(dest) ) logger.debug("Sending to: %s", str(destinations)) if not destinations: return sent_pdus_destination_dist.inc_by(len(destinations)) for destination in destinations: self.pending_pdus_by_dest.setdefault(destination, []).append( (pdu, order) ) self._attempt_new_transaction(destination) @logcontext.preserve_fn # the caller should not yield on this @defer.inlineCallbacks def send_presence(self, states): """Send the new presence states to the appropriate destinations. This actually queues up the presence states ready for sending and triggers a background task to process them and send out the transactions. Args: states (list(UserPresenceState)) """ # First we queue up the new presence by user ID, so multiple presence # updates in quick successtion are correctly handled # We only want to send presence for our own users, so lets always just # filter here just in case. self.pending_presence.update({ state.user_id: state for state in states if self.is_mine_id(state.user_id) }) # We then handle the new pending presence in batches, first figuring # out the destinations we need to send each state to and then poking it # to attempt a new transaction. 
We linearize this so that we don't # accidentally mess up the ordering and send multiple presence updates # in the wrong order if self._processing_pending_presence: return self._processing_pending_presence = True try: while True: states_map = self.pending_presence self.pending_presence = {} if not states_map: break yield self._process_presence_inner(states_map.values()) finally: self._processing_pending_presence = False @measure_func("txnqueue._process_presence") @defer.inlineCallbacks def _process_presence_inner(self, states): """Given a list of states populate self.pending_presence_by_dest and poke to send a new transaction to each destination Args: states (list(UserPresenceState)) """ hosts_and_states = yield get_interested_remotes(self.store, states, self.state) for destinations, states in hosts_and_states: for destination in destinations: if not self.can_send_to(destination): continue self.pending_presence_by_dest.setdefault( destination, {} ).update({ state.user_id: state for state in states }) self._attempt_new_transaction(destination) def send_edu(self, destination, edu_type, content, key=None): edu = Edu( origin=self.server_name, destination=destination, edu_type=edu_type, content=content, ) if not self.can_send_to(destination): return sent_edus_counter.inc() if key: self.pending_edus_keyed_by_dest.setdefault( destination, {} )[(edu.edu_type, key)] = edu else: self.pending_edus_by_dest.setdefault(destination, []).append(edu) self._attempt_new_transaction(destination) def send_failure(self, failure, destination): if destination == self.server_name or destination == "localhost": return if not self.can_send_to(destination): return self.pending_failures_by_dest.setdefault( destination, [] ).append(failure) self._attempt_new_transaction(destination) def send_device_messages(self, destination): if destination == self.server_name or destination == "localhost": return if not self.can_send_to(destination): return self._attempt_new_transaction(destination) def get_current_token(self): return 0 def _attempt_new_transaction(self, destination): """Try to start a new transaction to this destination If there is already a transaction in progress to this destination, returns immediately. Otherwise kicks off the process of sending a transaction in the background. Args: destination (str): Returns: None """ # list of (pending_pdu, deferred, order) if destination in self.pending_transactions: # XXX: pending_transactions can get stuck on by a never-ending # request at which point pending_pdus_by_dest just keeps growing. # we need application-layer timeouts of some flavour of these # requests logger.debug( "TX [%s] Transaction already in progress", destination ) return logger.debug("TX [%s] Starting transaction loop", destination) # Drop the logcontext before starting the transaction. It doesn't # really make sense to log all the outbound transactions against # whatever path led us to this point: that's pretty arbitrary really. # # (this also means we can fire off _perform_transaction without # yielding) with logcontext.PreserveLoggingContext(): self._transaction_transmission_loop(destination) @defer.inlineCallbacks def _transaction_transmission_loop(self, destination): pending_pdus = [] try: self.pending_transactions[destination] = 1 # This will throw if we wouldn't retry. We do this here so we fail # quickly, but we will later check this again in the http client, # hence why we throw the result away. yield get_retry_limiter(destination, self.clock, self.store) # XXX: what's this for? 
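            # (Mechanism only: run_on_reactor, from synapse.util.async, returns
            # a Deferred that fires on the next reactor iteration, so the rest
            # of this function runs on a later tick of the main loop rather
            # than synchronously in the caller.)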
yield run_on_reactor() pending_pdus = [] while True: device_message_edus, device_stream_id, dev_list_id = ( yield self._get_new_device_messages(destination) ) # BEGIN CRITICAL SECTION # # In order to avoid a race condition, we need to make sure that # the following code (from popping the queues up to the point # where we decide if we actually have any pending messages) is # atomic - otherwise new PDUs or EDUs might arrive in the # meantime, but not get sent because we hold the # pending_transactions flag. pending_pdus = self.pending_pdus_by_dest.pop(destination, []) pending_edus = self.pending_edus_by_dest.pop(destination, []) pending_presence = self.pending_presence_by_dest.pop(destination, {}) pending_failures = self.pending_failures_by_dest.pop(destination, []) pending_edus.extend( self.pending_edus_keyed_by_dest.pop(destination, {}).values() ) pending_edus.extend(device_message_edus) if pending_presence: pending_edus.append( Edu( origin=self.server_name, destination=destination, edu_type="m.presence", content={ "push": [ format_user_presence_state( presence, self.clock.time_msec() ) for presence in pending_presence.values() ] }, ) ) if pending_pdus: logger.debug("TX [%s] len(pending_pdus_by_dest[dest]) = %d", destination, len(pending_pdus)) if not pending_pdus and not pending_edus and not pending_failures: logger.debug("TX [%s] Nothing to send", destination) self.last_device_stream_id_by_dest[destination] = ( device_stream_id ) return # END CRITICAL SECTION success = yield self._send_new_transaction( destination, pending_pdus, pending_edus, pending_failures, ) if success: sent_transactions_counter.inc() # Remove the acknowledged device messages from the database # Only bother if we actually sent some device messages if device_message_edus: yield self.store.delete_device_msgs_for_remote( destination, device_stream_id ) logger.info("Marking as sent %r %r", destination, dev_list_id) yield self.store.mark_as_sent_devices_by_remote( destination, dev_list_id ) self.last_device_stream_id_by_dest[destination] = device_stream_id self.last_device_list_stream_id_by_dest[destination] = dev_list_id else: break except NotRetryingDestination as e: logger.debug( "TX [%s] not ready for retry yet (next retry at %s) - " "dropping transaction for now", destination, datetime.datetime.fromtimestamp( (e.retry_last_ts + e.retry_interval) / 1000.0 ), ) except Exception as e: logger.warn( "TX [%s] Failed to send transaction: %s", destination, e, ) for p, _ in pending_pdus: logger.info("Failed to send event %s to %s", p.event_id, destination) finally: # We want to be *very* sure we delete this after we stop processing self.pending_transactions.pop(destination, None) @defer.inlineCallbacks def _get_new_device_messages(self, destination): last_device_stream_id = self.last_device_stream_id_by_dest.get(destination, 0) to_device_stream_id = self.store.get_to_device_stream_token() contents, stream_id = yield self.store.get_new_device_msgs_for_remote( destination, last_device_stream_id, to_device_stream_id ) edus = [ Edu( origin=self.server_name, destination=destination, edu_type="m.direct_to_device", content=content, ) for content in contents ] last_device_list = self.last_device_list_stream_id_by_dest.get(destination, 0) now_stream_id, results = yield self.store.get_devices_by_remote( destination, last_device_list ) edus.extend( Edu( origin=self.server_name, destination=destination, edu_type="m.device_list_update", content=content, ) for content in results ) defer.returnValue((edus, stream_id, now_stream_id)) 
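    # _get_new_device_messages returns (edus, device_stream_id, dev_list_id);
    # the transmission loop above only advances last_device_stream_id_by_dest
    # and last_device_list_stream_id_by_dest once the transaction has been sent
    # successfully, so unacknowledged device messages get retried.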
@measure_func("_send_new_transaction") @defer.inlineCallbacks def _send_new_transaction(self, destination, pending_pdus, pending_edus, pending_failures): # Sort based on the order field pending_pdus.sort(key=lambda t: t[1]) pdus = [x[0] for x in pending_pdus] edus = pending_edus failures = [x.get_dict() for x in pending_failures] success = True logger.debug("TX [%s] _attempt_new_transaction", destination) txn_id = str(self._next_txn_id) logger.debug( "TX [%s] {%s} Attempting new transaction" " (pdus: %d, edus: %d, failures: %d)", destination, txn_id, len(pdus), len(edus), len(failures) ) logger.debug("TX [%s] Persisting transaction...", destination) transaction = Transaction.create_new( origin_server_ts=int(self.clock.time_msec()), transaction_id=txn_id, origin=self.server_name, destination=destination, pdus=pdus, edus=edus, pdu_failures=failures, ) self._next_txn_id += 1 yield self.transaction_actions.prepare_to_send(transaction) logger.debug("TX [%s] Persisted transaction", destination) logger.info( "TX [%s] {%s} Sending transaction [%s]," " (PDUs: %d, EDUs: %d, failures: %d)", destination, txn_id, transaction.transaction_id, len(pdus), len(edus), len(failures), ) # Actually send the transaction # FIXME (erikj): This is a bit of a hack to make the Pdu age # keys work def json_data_cb(): data = transaction.get_dict() now = int(self.clock.time_msec()) if "pdus" in data: for p in data["pdus"]: if "age_ts" in p: unsigned = p.setdefault("unsigned", {}) unsigned["age"] = now - int(p["age_ts"]) del p["age_ts"] return data try: response = yield self.transport_layer.send_transaction( transaction, json_data_cb ) code = 200 if response: for e_id, r in response.get("pdus", {}).items(): if "error" in r: logger.warn( "Transaction returned error for %s: %s", e_id, r, ) except HttpResponseException as e: code = e.code response = e.response if e.code in (401, 404, 429) or 500 <= e.code: logger.info( "TX [%s] {%s} got %d response", destination, txn_id, code ) raise e logger.info( "TX [%s] {%s} got %d response", destination, txn_id, code ) logger.debug("TX [%s] Sent transaction", destination) logger.debug("TX [%s] Marking as delivered...", destination) yield self.transaction_actions.delivered( transaction, code, response ) logger.debug("TX [%s] Marked as delivered", destination) if code != 200: for p in pdus: logger.info( "Failed to send event %s to %s", p.event_id, destination ) success = False defer.returnValue(success) synapse-0.24.0/synapse/federation/transport/000077500000000000000000000000001317335640100211205ustar00rootroot00000000000000synapse-0.24.0/synapse/federation/transport/__init__.py000066400000000000000000000017121317335640100232320ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """The transport layer is responsible for both sending transactions to remote home servers and receiving a variety of requests from other home servers. 
By default this is done over HTTPS (and all home servers are required to support HTTPS), however individual pairings of servers may decide to communicate over a different (albeit still reliable) protocol. """ synapse-0.24.0/synapse/federation/transport/client.py000066400000000000000000000653601317335640100227620ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from twisted.internet import defer from synapse.api.constants import Membership from synapse.api.urls import FEDERATION_PREFIX as PREFIX from synapse.util.logutils import log_function import logging logger = logging.getLogger(__name__) class TransportLayerClient(object): """Sends federation HTTP requests to other servers""" def __init__(self, hs): self.server_name = hs.hostname self.client = hs.get_http_client() @log_function def get_room_state(self, destination, room_id, event_id): """ Requests all state for a given room from the given server at the given event. Args: destination (str): The host name of the remote home server we want to get the state from. context (str): The name of the context we want the state of event_id (str): The event we want the context at. Returns: Deferred: Results in a dict received from the remote homeserver. """ logger.debug("get_room_state dest=%s, room=%s", destination, room_id) path = PREFIX + "/state/%s/" % room_id return self.client.get_json( destination, path=path, args={"event_id": event_id}, ) @log_function def get_room_state_ids(self, destination, room_id, event_id): """ Requests all state for a given room from the given server at the given event. Returns the state's event_id's Args: destination (str): The host name of the remote home server we want to get the state from. context (str): The name of the context we want the state of event_id (str): The event we want the context at. Returns: Deferred: Results in a dict received from the remote homeserver. """ logger.debug("get_room_state_ids dest=%s, room=%s", destination, room_id) path = PREFIX + "/state_ids/%s/" % room_id return self.client.get_json( destination, path=path, args={"event_id": event_id}, ) @log_function def get_event(self, destination, event_id, timeout=None): """ Requests the pdu with give id and origin from the given server. Args: destination (str): The host name of the remote home server we want to get the state from. event_id (str): The id of the event being requested. timeout (int): How long to try (in ms) the destination for before giving up. None indicates no timeout. Returns: Deferred: Results in a dict received from the remote homeserver. """ logger.debug("get_pdu dest=%s, event_id=%s", destination, event_id) path = PREFIX + "/event/%s/" % (event_id, ) return self.client.get_json(destination, path=path, timeout=timeout) @log_function def backfill(self, destination, room_id, event_tuples, limit): """ Requests `limit` previous PDUs in a given context before list of PDUs. 
Args: dest (str) room_id (str) event_tuples (list) limt (int) Returns: Deferred: Results in a dict received from the remote homeserver. """ logger.debug( "backfill dest=%s, room_id=%s, event_tuples=%s, limit=%s", destination, room_id, repr(event_tuples), str(limit) ) if not event_tuples: # TODO: raise? return path = PREFIX + "/backfill/%s/" % (room_id,) args = { "v": event_tuples, "limit": [str(limit)], } return self.client.get_json( destination, path=path, args=args, ) @defer.inlineCallbacks @log_function def send_transaction(self, transaction, json_data_callback=None): """ Sends the given Transaction to its destination Args: transaction (Transaction) Returns: Deferred: Results of the deferred is a tuple in the form of (response_code, response_body) where the response_body is a python dict decoded from json """ logger.debug( "send_data dest=%s, txid=%s", transaction.destination, transaction.transaction_id ) if transaction.destination == self.server_name: raise RuntimeError("Transport layer cannot send to itself!") # FIXME: This is only used by the tests. The actual json sent is # generated by the json_data_callback. json_data = transaction.get_dict() response = yield self.client.put_json( transaction.destination, path=PREFIX + "/send/%s/" % transaction.transaction_id, data=json_data, json_data_callback=json_data_callback, long_retries=True, backoff_on_404=True, # If we get a 404 the other side has gone ) logger.debug( "send_data dest=%s, txid=%s, got response: 200", transaction.destination, transaction.transaction_id, ) defer.returnValue(response) @defer.inlineCallbacks @log_function def make_query(self, destination, query_type, args, retry_on_dns_fail, ignore_backoff=False): path = PREFIX + "/query/%s" % query_type content = yield self.client.get_json( destination=destination, path=path, args=args, retry_on_dns_fail=retry_on_dns_fail, timeout=10000, ignore_backoff=ignore_backoff, ) defer.returnValue(content) @defer.inlineCallbacks @log_function def make_membership_event(self, destination, room_id, user_id, membership): """Asks a remote server to build and sign us a membership event Note that this does not append any events to any graphs. Args: destination (str): address of remote homeserver room_id (str): room to join/leave user_id (str): user to be joined/left membership (str): one of join/leave Returns: Deferred: Succeeds when we get a 2xx HTTP response. The result will be the decoded JSON body (ie, the new event). Fails with ``HTTPRequestException`` if we get an HTTP response code >= 300. Fails with ``NotRetryingDestination`` if we are not yet ready to retry this server. """ valid_memberships = {Membership.JOIN, Membership.LEAVE} if membership not in valid_memberships: raise RuntimeError( "make_membership_event called with membership='%s', must be one of %s" % (membership, ",".join(valid_memberships)) ) path = PREFIX + "/make_%s/%s/%s" % (membership, room_id, user_id) ignore_backoff = False retry_on_dns_fail = False if membership == Membership.LEAVE: # we particularly want to do our best to send leave events. The # problem is that if it fails, we won't retry it later, so if the # remote server was just having a momentary blip, the room will be # out of sync. 
ignore_backoff = True retry_on_dns_fail = True content = yield self.client.get_json( destination=destination, path=path, retry_on_dns_fail=retry_on_dns_fail, timeout=20000, ignore_backoff=ignore_backoff, ) defer.returnValue(content) @defer.inlineCallbacks @log_function def send_join(self, destination, room_id, event_id, content): path = PREFIX + "/send_join/%s/%s" % (room_id, event_id) response = yield self.client.put_json( destination=destination, path=path, data=content, ) defer.returnValue(response) @defer.inlineCallbacks @log_function def send_leave(self, destination, room_id, event_id, content): path = PREFIX + "/send_leave/%s/%s" % (room_id, event_id) response = yield self.client.put_json( destination=destination, path=path, data=content, # we want to do our best to send this through. The problem is # that if it fails, we won't retry it later, so if the remote # server was just having a momentary blip, the room will be out of # sync. ignore_backoff=True, ) defer.returnValue(response) @defer.inlineCallbacks @log_function def send_invite(self, destination, room_id, event_id, content): path = PREFIX + "/invite/%s/%s" % (room_id, event_id) response = yield self.client.put_json( destination=destination, path=path, data=content, ignore_backoff=True, ) defer.returnValue(response) @defer.inlineCallbacks @log_function def get_public_rooms(self, remote_server, limit, since_token, search_filter=None, include_all_networks=False, third_party_instance_id=None): path = PREFIX + "/publicRooms" args = { "include_all_networks": "true" if include_all_networks else "false", } if third_party_instance_id: args["third_party_instance_id"] = third_party_instance_id, if limit: args["limit"] = [str(limit)] if since_token: args["since"] = [since_token] # TODO(erikj): Actually send the search_filter across federation. response = yield self.client.get_json( destination=remote_server, path=path, args=args, ignore_backoff=True, ) defer.returnValue(response) @defer.inlineCallbacks @log_function def exchange_third_party_invite(self, destination, room_id, event_dict): path = PREFIX + "/exchange_third_party_invite/%s" % (room_id,) response = yield self.client.put_json( destination=destination, path=path, data=event_dict, ) defer.returnValue(response) @defer.inlineCallbacks @log_function def get_event_auth(self, destination, room_id, event_id): path = PREFIX + "/event_auth/%s/%s" % (room_id, event_id) content = yield self.client.get_json( destination=destination, path=path, ) defer.returnValue(content) @defer.inlineCallbacks @log_function def send_query_auth(self, destination, room_id, event_id, content): path = PREFIX + "/query_auth/%s/%s" % (room_id, event_id) content = yield self.client.post_json( destination=destination, path=path, data=content, ) defer.returnValue(content) @defer.inlineCallbacks @log_function def query_client_keys(self, destination, query_content, timeout): """Query the device keys for a list of user ids hosted on a remote server. Request: { "device_keys": { "": [""] } } Response: { "device_keys": { "": { "": {...} } } } Args: destination(str): The server to query. query_content(dict): The user ids to query. Returns: A dict containg the device keys. """ path = PREFIX + "/user/keys/query" content = yield self.client.post_json( destination=destination, path=path, data=query_content, timeout=timeout, ) defer.returnValue(content) @defer.inlineCallbacks @log_function def query_user_devices(self, destination, user_id, timeout): """Query the devices for a user id hosted on a remote server. 
Response: { "stream_id": "...", "devices": [ { ... } ] } Args: destination(str): The server to query. query_content(dict): The user ids to query. Returns: A dict containg the device keys. """ path = PREFIX + "/user/devices/" + user_id content = yield self.client.get_json( destination=destination, path=path, timeout=timeout, ) defer.returnValue(content) @defer.inlineCallbacks @log_function def claim_client_keys(self, destination, query_content, timeout): """Claim one-time keys for a list of devices hosted on a remote server. Request: { "one_time_keys": { "": { "": "" } } } Response: { "device_keys": { "": { "": { ":": "" } } } } Args: destination(str): The server to query. query_content(dict): The user ids to query. Returns: A dict containg the one-time keys. """ path = PREFIX + "/user/keys/claim" content = yield self.client.post_json( destination=destination, path=path, data=query_content, timeout=timeout, ) defer.returnValue(content) @defer.inlineCallbacks @log_function def get_missing_events(self, destination, room_id, earliest_events, latest_events, limit, min_depth, timeout): path = PREFIX + "/get_missing_events/%s" % (room_id,) content = yield self.client.post_json( destination=destination, path=path, data={ "limit": int(limit), "min_depth": int(min_depth), "earliest_events": earliest_events, "latest_events": latest_events, }, timeout=timeout, ) defer.returnValue(content) @log_function def get_group_profile(self, destination, group_id, requester_user_id): """Get a group profile """ path = PREFIX + "/groups/%s/profile" % (group_id,) return self.client.get_json( destination=destination, path=path, args={"requester_user_id": requester_user_id}, ignore_backoff=True, ) @log_function def get_group_summary(self, destination, group_id, requester_user_id): """Get a group summary """ path = PREFIX + "/groups/%s/summary" % (group_id,) return self.client.get_json( destination=destination, path=path, args={"requester_user_id": requester_user_id}, ignore_backoff=True, ) @log_function def get_rooms_in_group(self, destination, group_id, requester_user_id): """Get all rooms in a group """ path = PREFIX + "/groups/%s/rooms" % (group_id,) return self.client.get_json( destination=destination, path=path, args={"requester_user_id": requester_user_id}, ignore_backoff=True, ) def add_room_to_group(self, destination, group_id, requester_user_id, room_id, content): """Add a room to a group """ path = PREFIX + "/groups/%s/room/%s" % (group_id, room_id,) return self.client.post_json( destination=destination, path=path, args={"requester_user_id": requester_user_id}, data=content, ignore_backoff=True, ) def remove_room_from_group(self, destination, group_id, requester_user_id, room_id): """Remove a room from a group """ path = PREFIX + "/groups/%s/room/%s" % (group_id, room_id,) return self.client.delete_json( destination=destination, path=path, args={"requester_user_id": requester_user_id}, ignore_backoff=True, ) @log_function def get_users_in_group(self, destination, group_id, requester_user_id): """Get users in a group """ path = PREFIX + "/groups/%s/users" % (group_id,) return self.client.get_json( destination=destination, path=path, args={"requester_user_id": requester_user_id}, ignore_backoff=True, ) @log_function def get_invited_users_in_group(self, destination, group_id, requester_user_id): """Get users that have been invited to a group """ path = PREFIX + "/groups/%s/invited_users" % (group_id,) return self.client.get_json( destination=destination, path=path, args={"requester_user_id": requester_user_id}, 
ignore_backoff=True, ) @log_function def accept_group_invite(self, destination, group_id, user_id, content): """Accept a group invite """ path = PREFIX + "/groups/%s/users/%s/accept_invite" % (group_id, user_id) return self.client.post_json( destination=destination, path=path, data=content, ignore_backoff=True, ) @log_function def invite_to_group(self, destination, group_id, user_id, requester_user_id, content): """Invite a user to a group """ path = PREFIX + "/groups/%s/users/%s/invite" % (group_id, user_id) return self.client.post_json( destination=destination, path=path, args={"requester_user_id": requester_user_id}, data=content, ignore_backoff=True, ) @log_function def invite_to_group_notification(self, destination, group_id, user_id, content): """Sent by group server to inform a user's server that they have been invited. """ path = PREFIX + "/groups/local/%s/users/%s/invite" % (group_id, user_id) return self.client.post_json( destination=destination, path=path, data=content, ignore_backoff=True, ) @log_function def remove_user_from_group(self, destination, group_id, requester_user_id, user_id, content): """Remove a user fron a group """ path = PREFIX + "/groups/%s/users/%s/remove" % (group_id, user_id) return self.client.post_json( destination=destination, path=path, args={"requester_user_id": requester_user_id}, data=content, ignore_backoff=True, ) @log_function def remove_user_from_group_notification(self, destination, group_id, user_id, content): """Sent by group server to inform a user's server that they have been kicked from the group. """ path = PREFIX + "/groups/local/%s/users/%s/remove" % (group_id, user_id) return self.client.post_json( destination=destination, path=path, data=content, ignore_backoff=True, ) @log_function def renew_group_attestation(self, destination, group_id, user_id, content): """Sent by either a group server or a user's server to periodically update the attestations """ path = PREFIX + "/groups/%s/renew_attestation/%s" % (group_id, user_id) return self.client.post_json( destination=destination, path=path, data=content, ignore_backoff=True, ) @log_function def update_group_summary_room(self, destination, group_id, user_id, room_id, category_id, content): """Update a room entry in a group summary """ if category_id: path = PREFIX + "/groups/%s/summary/categories/%s/rooms/%s" % ( group_id, category_id, room_id, ) else: path = PREFIX + "/groups/%s/summary/rooms/%s" % (group_id, room_id,) return self.client.post_json( destination=destination, path=path, args={"requester_user_id": user_id}, data=content, ignore_backoff=True, ) @log_function def delete_group_summary_room(self, destination, group_id, user_id, room_id, category_id): """Delete a room entry in a group summary """ if category_id: path = PREFIX + "/groups/%s/summary/categories/%s/rooms/%s" % ( group_id, category_id, room_id, ) else: path = PREFIX + "/groups/%s/summary/rooms/%s" % (group_id, room_id,) return self.client.delete_json( destination=destination, path=path, args={"requester_user_id": user_id}, ignore_backoff=True, ) @log_function def get_group_categories(self, destination, group_id, requester_user_id): """Get all categories in a group """ path = PREFIX + "/groups/%s/categories" % (group_id,) return self.client.get_json( destination=destination, path=path, args={"requester_user_id": requester_user_id}, ignore_backoff=True, ) @log_function def get_group_category(self, destination, group_id, requester_user_id, category_id): """Get category info in a group """ path = PREFIX + 
"/groups/%s/categories/%s" % (group_id, category_id,) return self.client.get_json( destination=destination, path=path, args={"requester_user_id": requester_user_id}, ignore_backoff=True, ) @log_function def update_group_category(self, destination, group_id, requester_user_id, category_id, content): """Update a category in a group """ path = PREFIX + "/groups/%s/categories/%s" % (group_id, category_id,) return self.client.post_json( destination=destination, path=path, args={"requester_user_id": requester_user_id}, data=content, ignore_backoff=True, ) @log_function def delete_group_category(self, destination, group_id, requester_user_id, category_id): """Delete a category in a group """ path = PREFIX + "/groups/%s/categories/%s" % (group_id, category_id,) return self.client.delete_json( destination=destination, path=path, args={"requester_user_id": requester_user_id}, ignore_backoff=True, ) @log_function def get_group_roles(self, destination, group_id, requester_user_id): """Get all roles in a group """ path = PREFIX + "/groups/%s/roles" % (group_id,) return self.client.get_json( destination=destination, path=path, args={"requester_user_id": requester_user_id}, ignore_backoff=True, ) @log_function def get_group_role(self, destination, group_id, requester_user_id, role_id): """Get a roles info """ path = PREFIX + "/groups/%s/roles/%s" % (group_id, role_id,) return self.client.get_json( destination=destination, path=path, args={"requester_user_id": requester_user_id}, ignore_backoff=True, ) @log_function def update_group_role(self, destination, group_id, requester_user_id, role_id, content): """Update a role in a group """ path = PREFIX + "/groups/%s/roles/%s" % (group_id, role_id,) return self.client.post_json( destination=destination, path=path, args={"requester_user_id": requester_user_id}, data=content, ignore_backoff=True, ) @log_function def delete_group_role(self, destination, group_id, requester_user_id, role_id): """Delete a role in a group """ path = PREFIX + "/groups/%s/roles/%s" % (group_id, role_id,) return self.client.delete_json( destination=destination, path=path, args={"requester_user_id": requester_user_id}, ignore_backoff=True, ) @log_function def update_group_summary_user(self, destination, group_id, requester_user_id, user_id, role_id, content): """Update a users entry in a group """ if role_id: path = PREFIX + "/groups/%s/summary/roles/%s/users/%s" % ( group_id, role_id, user_id, ) else: path = PREFIX + "/groups/%s/summary/users/%s" % (group_id, user_id,) return self.client.post_json( destination=destination, path=path, args={"requester_user_id": requester_user_id}, data=content, ignore_backoff=True, ) @log_function def delete_group_summary_user(self, destination, group_id, requester_user_id, user_id, role_id): """Delete a users entry in a group """ if role_id: path = PREFIX + "/groups/%s/summary/roles/%s/users/%s" % ( group_id, role_id, user_id, ) else: path = PREFIX + "/groups/%s/summary/users/%s" % (group_id, user_id,) return self.client.delete_json( destination=destination, path=path, args={"requester_user_id": requester_user_id}, ignore_backoff=True, ) def bulk_get_publicised_groups(self, destination, user_ids): """Get the groups a list of users are publicising """ path = PREFIX + "/get_groups_publicised" content = {"user_ids": user_ids} return self.client.post_json( destination=destination, path=path, data=content, ignore_backoff=True, ) 
synapse-0.24.0/synapse/federation/transport/server.py000066400000000000000000001163441317335640100230110ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from twisted.internet import defer from synapse.api.urls import FEDERATION_PREFIX as PREFIX from synapse.api.errors import Codes, SynapseError from synapse.http.server import JsonResource from synapse.http.servlet import ( parse_json_object_from_request, parse_integer_from_args, parse_string_from_args, parse_boolean_from_args, ) from synapse.util.ratelimitutils import FederationRateLimiter from synapse.util.versionstring import get_version_string from synapse.util.logcontext import preserve_fn from synapse.types import ThirdPartyInstanceID, get_domain_from_id import functools import logging import re import synapse logger = logging.getLogger(__name__) class TransportLayerServer(JsonResource): """Handles incoming federation HTTP requests""" def __init__(self, hs): self.hs = hs self.clock = hs.get_clock() super(TransportLayerServer, self).__init__(hs, canonical_json=False) self.authenticator = Authenticator(hs) self.ratelimiter = FederationRateLimiter( self.clock, window_size=hs.config.federation_rc_window_size, sleep_limit=hs.config.federation_rc_sleep_limit, sleep_msec=hs.config.federation_rc_sleep_delay, reject_limit=hs.config.federation_rc_reject_limit, concurrent_requests=hs.config.federation_rc_concurrent, ) self.register_servlets() def register_servlets(self): register_servlets( self.hs, resource=self, ratelimiter=self.ratelimiter, authenticator=self.authenticator, ) class AuthenticationError(SynapseError): """There was a problem authenticating the request""" pass class NoAuthenticationError(AuthenticationError): """The request had no authentication information""" pass class Authenticator(object): def __init__(self, hs): self.keyring = hs.get_keyring() self.server_name = hs.hostname self.store = hs.get_datastore() # A method just so we can pass 'self' as the authenticator to the Servlets @defer.inlineCallbacks def authenticate_request(self, request, content): json_request = { "method": request.method, "uri": request.uri, "destination": self.server_name, "signatures": {}, } if content is not None: json_request["content"] = content origin = None def parse_auth_header(header_str): try: params = auth.split(" ")[1].split(",") param_dict = dict(kv.split("=") for kv in params) def strip_quotes(value): if value.startswith("\""): return value[1:-1] else: return value origin = strip_quotes(param_dict["origin"]) key = strip_quotes(param_dict["key"]) sig = strip_quotes(param_dict["sig"]) return (origin, key, sig) except: raise AuthenticationError( 400, "Malformed Authorization header", Codes.UNAUTHORIZED ) auth_headers = request.requestHeaders.getRawHeaders(b"Authorization") if not auth_headers: raise NoAuthenticationError( 401, "Missing Authorization headers", Codes.UNAUTHORIZED, ) for auth in auth_headers: if auth.startswith("X-Matrix"): (origin, key, sig) = 
parse_auth_header(auth) json_request["origin"] = origin json_request["signatures"].setdefault(origin, {})[key] = sig if not json_request["signatures"]: raise NoAuthenticationError( 401, "Missing Authorization headers", Codes.UNAUTHORIZED, ) yield self.keyring.verify_json_for_server(origin, json_request) logger.info("Request from %s", origin) request.authenticated_entity = origin # If we get a valid signed request from the other side, its probably # alive retry_timings = yield self.store.get_destination_retry_timings(origin) if retry_timings and retry_timings["retry_last_ts"]: logger.info("Marking origin %r as up", origin) preserve_fn(self.store.set_destination_retry_timings)(origin, 0, 0) defer.returnValue(origin) class BaseFederationServlet(object): REQUIRE_AUTH = True def __init__(self, handler, authenticator, ratelimiter, server_name): self.handler = handler self.authenticator = authenticator self.ratelimiter = ratelimiter def _wrap(self, func): authenticator = self.authenticator ratelimiter = self.ratelimiter @defer.inlineCallbacks @functools.wraps(func) def new_func(request, *args, **kwargs): content = None if request.method in ["PUT", "POST"]: # TODO: Handle other method types? other content types? content = parse_json_object_from_request(request) try: origin = yield authenticator.authenticate_request(request, content) except NoAuthenticationError: origin = None if self.REQUIRE_AUTH: logger.exception("authenticate_request failed") raise except: logger.exception("authenticate_request failed") raise if origin: with ratelimiter.ratelimit(origin) as d: yield d response = yield func( origin, content, request.args, *args, **kwargs ) else: response = yield func( origin, content, request.args, *args, **kwargs ) defer.returnValue(response) # Extra logic that functools.wraps() doesn't finish new_func.__self__ = func.__self__ return new_func def register(self, server): pattern = re.compile("^" + PREFIX + self.PATH + "$") for method in ("GET", "PUT", "POST"): code = getattr(self, "on_%s" % (method), None) if code is None: continue server.register_paths(method, (pattern,), self._wrap(code)) class FederationSendServlet(BaseFederationServlet): PATH = "/send/(?P[^/]*)/" def __init__(self, handler, server_name, **kwargs): super(FederationSendServlet, self).__init__( handler, server_name=server_name, **kwargs ) self.server_name = server_name # This is when someone is trying to send us a bunch of data. @defer.inlineCallbacks def on_PUT(self, origin, content, query, transaction_id): """ Called on PUT /send// Args: request (twisted.web.http.Request): The HTTP request. transaction_id (str): The transaction_id associated with this request. This is *not* None. Returns: Deferred: Results in a tuple of `(code, response)`, where `response` is a python dict to be converted into JSON that is used as the response body. """ # Parse the request try: transaction_data = content logger.debug( "Decoded %s: %s", transaction_id, str(transaction_data) ) logger.info( "Received txn %s from %s. (PDUs: %d, EDUs: %d, failures: %d)", transaction_id, origin, len(transaction_data.get("pdus", [])), len(transaction_data.get("edus", [])), len(transaction_data.get("failures", [])), ) # We should ideally be getting this from the security layer. # origin = body["origin"] # Add some extra data to the transaction dict that isn't included # in the request body. 
transaction_data.update( transaction_id=transaction_id, destination=self.server_name ) except Exception as e: logger.exception(e) defer.returnValue((400, {"error": "Invalid transaction"})) return try: code, response = yield self.handler.on_incoming_transaction( transaction_data ) except: logger.exception("on_incoming_transaction failed") raise defer.returnValue((code, response)) class FederationPullServlet(BaseFederationServlet): PATH = "/pull/" # This is for when someone asks us for everything since version X def on_GET(self, origin, content, query): return self.handler.on_pull_request(query["origin"][0], query["v"]) class FederationEventServlet(BaseFederationServlet): PATH = "/event/(?P[^/]*)/" # This is when someone asks for a data item for a given server data_id pair. def on_GET(self, origin, content, query, event_id): return self.handler.on_pdu_request(origin, event_id) class FederationStateServlet(BaseFederationServlet): PATH = "/state/(?P[^/]*)/" # This is when someone asks for all data for a given context. def on_GET(self, origin, content, query, context): return self.handler.on_context_state_request( origin, context, query.get("event_id", [None])[0], ) class FederationStateIdsServlet(BaseFederationServlet): PATH = "/state_ids/(?P[^/]*)/" def on_GET(self, origin, content, query, room_id): return self.handler.on_state_ids_request( origin, room_id, query.get("event_id", [None])[0], ) class FederationBackfillServlet(BaseFederationServlet): PATH = "/backfill/(?P[^/]*)/" def on_GET(self, origin, content, query, context): versions = query["v"] limits = query["limit"] if not limits: return defer.succeed((400, {"error": "Did not include limit param"})) limit = int(limits[-1]) return self.handler.on_backfill_request(origin, context, versions, limit) class FederationQueryServlet(BaseFederationServlet): PATH = "/query/(?P[^/]*)" # This is when we receive a server-server Query def on_GET(self, origin, content, query, query_type): return self.handler.on_query_request( query_type, {k: v[0].decode("utf-8") for k, v in query.items()} ) class FederationMakeJoinServlet(BaseFederationServlet): PATH = "/make_join/(?P[^/]*)/(?P[^/]*)" @defer.inlineCallbacks def on_GET(self, origin, content, query, context, user_id): content = yield self.handler.on_make_join_request(context, user_id) defer.returnValue((200, content)) class FederationMakeLeaveServlet(BaseFederationServlet): PATH = "/make_leave/(?P[^/]*)/(?P[^/]*)" @defer.inlineCallbacks def on_GET(self, origin, content, query, context, user_id): content = yield self.handler.on_make_leave_request(context, user_id) defer.returnValue((200, content)) class FederationSendLeaveServlet(BaseFederationServlet): PATH = "/send_leave/(?P[^/]*)/(?P[^/]*)" @defer.inlineCallbacks def on_PUT(self, origin, content, query, room_id, txid): content = yield self.handler.on_send_leave_request(origin, content) defer.returnValue((200, content)) class FederationEventAuthServlet(BaseFederationServlet): PATH = "/event_auth/(?P[^/]*)/(?P[^/]*)" def on_GET(self, origin, content, query, context, event_id): return self.handler.on_event_auth(origin, context, event_id) class FederationSendJoinServlet(BaseFederationServlet): PATH = "/send_join/(?P[^/]*)/(?P[^/]*)" @defer.inlineCallbacks def on_PUT(self, origin, content, query, context, event_id): # TODO(paul): assert that context/event_id parsed from path actually # match those given in content content = yield self.handler.on_send_join_request(origin, content) defer.returnValue((200, content)) class 
FederationInviteServlet(BaseFederationServlet): PATH = "/invite/(?P[^/]*)/(?P[^/]*)" @defer.inlineCallbacks def on_PUT(self, origin, content, query, context, event_id): # TODO(paul): assert that context/event_id parsed from path actually # match those given in content content = yield self.handler.on_invite_request(origin, content) defer.returnValue((200, content)) class FederationThirdPartyInviteExchangeServlet(BaseFederationServlet): PATH = "/exchange_third_party_invite/(?P[^/]*)" @defer.inlineCallbacks def on_PUT(self, origin, content, query, room_id): content = yield self.handler.on_exchange_third_party_invite_request( origin, room_id, content ) defer.returnValue((200, content)) class FederationClientKeysQueryServlet(BaseFederationServlet): PATH = "/user/keys/query" def on_POST(self, origin, content, query): return self.handler.on_query_client_keys(origin, content) class FederationUserDevicesQueryServlet(BaseFederationServlet): PATH = "/user/devices/(?P[^/]*)" def on_GET(self, origin, content, query, user_id): return self.handler.on_query_user_devices(origin, user_id) class FederationClientKeysClaimServlet(BaseFederationServlet): PATH = "/user/keys/claim" @defer.inlineCallbacks def on_POST(self, origin, content, query): response = yield self.handler.on_claim_client_keys(origin, content) defer.returnValue((200, response)) class FederationQueryAuthServlet(BaseFederationServlet): PATH = "/query_auth/(?P[^/]*)/(?P[^/]*)" @defer.inlineCallbacks def on_POST(self, origin, content, query, context, event_id): new_content = yield self.handler.on_query_auth_request( origin, content, context, event_id ) defer.returnValue((200, new_content)) class FederationGetMissingEventsServlet(BaseFederationServlet): # TODO(paul): Why does this path alone end with "/?" optional? PATH = "/get_missing_events/(?P[^/]*)/?" @defer.inlineCallbacks def on_POST(self, origin, content, query, room_id): limit = int(content.get("limit", 10)) min_depth = int(content.get("min_depth", 0)) earliest_events = content.get("earliest_events", []) latest_events = content.get("latest_events", []) content = yield self.handler.on_get_missing_events( origin, room_id=room_id, earliest_events=earliest_events, latest_events=latest_events, min_depth=min_depth, limit=limit, ) defer.returnValue((200, content)) class On3pidBindServlet(BaseFederationServlet): PATH = "/3pid/onbind" REQUIRE_AUTH = False @defer.inlineCallbacks def on_POST(self, origin, content, query): if "invites" in content: last_exception = None for invite in content["invites"]: try: if "signed" not in invite or "token" not in invite["signed"]: message = ("Rejecting received notification of third-" "party invite without signed: %s" % (invite,)) logger.info(message) raise SynapseError(400, message) yield self.handler.exchange_third_party_invite( invite["sender"], invite["mxid"], invite["room_id"], invite["signed"], ) except Exception as e: last_exception = e if last_exception: raise last_exception defer.returnValue((200, {})) class OpenIdUserInfo(BaseFederationServlet): """ Exchange a bearer token for information about a user. 
The response format should be compatible with: http://openid.net/specs/openid-connect-core-1_0.html#UserInfoResponse GET /openid/userinfo?access_token=ABDEFGH HTTP/1.1 HTTP/1.1 200 OK Content-Type: application/json { "sub": "@userpart:example.org", } """ PATH = "/openid/userinfo" REQUIRE_AUTH = False @defer.inlineCallbacks def on_GET(self, origin, content, query): token = query.get("access_token", [None])[0] if token is None: defer.returnValue((401, { "errcode": "M_MISSING_TOKEN", "error": "Access Token required" })) return user_id = yield self.handler.on_openid_userinfo(token) if user_id is None: defer.returnValue((401, { "errcode": "M_UNKNOWN_TOKEN", "error": "Access Token unknown or expired" })) defer.returnValue((200, {"sub": user_id})) class PublicRoomList(BaseFederationServlet): """ Fetch the public room list for this server. This API returns information in the same format as /publicRooms on the client API, but will only ever include local public rooms and hence is intended for consumption by other home servers. GET /publicRooms HTTP/1.1 HTTP/1.1 200 OK Content-Type: application/json { "chunk": [ { "aliases": [ "#test:localhost" ], "guest_can_join": false, "name": "test room", "num_joined_members": 3, "room_id": "!whkydVegtvatLfXmPN:localhost", "world_readable": false } ], "end": "END", "start": "START" } """ PATH = "/publicRooms" @defer.inlineCallbacks def on_GET(self, origin, content, query): limit = parse_integer_from_args(query, "limit", 0) since_token = parse_string_from_args(query, "since", None) include_all_networks = parse_boolean_from_args( query, "include_all_networks", False ) third_party_instance_id = parse_string_from_args( query, "third_party_instance_id", None ) if include_all_networks: network_tuple = None elif third_party_instance_id: network_tuple = ThirdPartyInstanceID.from_string(third_party_instance_id) else: network_tuple = ThirdPartyInstanceID(None, None) data = yield self.handler.get_local_public_room_list( limit, since_token, network_tuple=network_tuple ) defer.returnValue((200, data)) class FederationVersionServlet(BaseFederationServlet): PATH = "/version" REQUIRE_AUTH = False def on_GET(self, origin, content, query): return defer.succeed((200, { "server": { "name": "Synapse", "version": get_version_string(synapse) }, })) class FederationGroupsProfileServlet(BaseFederationServlet): """Get the basic profile of a group on behalf of a user """ PATH = "/groups/(?P[^/]*)/profile$" @defer.inlineCallbacks def on_GET(self, origin, content, query, group_id): requester_user_id = parse_string_from_args(query, "requester_user_id") if get_domain_from_id(requester_user_id) != origin: raise SynapseError(403, "requester_user_id doesn't match origin") new_content = yield self.handler.get_group_profile( group_id, requester_user_id ) defer.returnValue((200, new_content)) class FederationGroupsSummaryServlet(BaseFederationServlet): PATH = "/groups/(?P[^/]*)/summary$" @defer.inlineCallbacks def on_GET(self, origin, content, query, group_id): requester_user_id = parse_string_from_args(query, "requester_user_id") if get_domain_from_id(requester_user_id) != origin: raise SynapseError(403, "requester_user_id doesn't match origin") new_content = yield self.handler.get_group_summary( group_id, requester_user_id ) defer.returnValue((200, new_content)) @defer.inlineCallbacks def on_POST(self, origin, content, query, group_id): requester_user_id = parse_string_from_args(query, "requester_user_id") if get_domain_from_id(requester_user_id) != origin: raise SynapseError(403, 
"requester_user_id doesn't match origin") new_content = yield self.handler.update_group_profile( group_id, requester_user_id, content ) defer.returnValue((200, new_content)) class FederationGroupsRoomsServlet(BaseFederationServlet): """Get the rooms in a group on behalf of a user """ PATH = "/groups/(?P[^/]*)/rooms$" @defer.inlineCallbacks def on_GET(self, origin, content, query, group_id): requester_user_id = parse_string_from_args(query, "requester_user_id") if get_domain_from_id(requester_user_id) != origin: raise SynapseError(403, "requester_user_id doesn't match origin") new_content = yield self.handler.get_rooms_in_group( group_id, requester_user_id ) defer.returnValue((200, new_content)) class FederationGroupsAddRoomsServlet(BaseFederationServlet): """Add/remove room from group """ PATH = "/groups/(?P[^/]*)/room/(?)$" @defer.inlineCallbacks def on_POST(self, origin, content, query, group_id, room_id): requester_user_id = parse_string_from_args(query, "requester_user_id") if get_domain_from_id(requester_user_id) != origin: raise SynapseError(403, "requester_user_id doesn't match origin") new_content = yield self.handler.add_room_to_group( group_id, requester_user_id, room_id, content ) defer.returnValue((200, new_content)) @defer.inlineCallbacks def on_DELETE(self, origin, content, query, group_id, room_id): requester_user_id = parse_string_from_args(query, "requester_user_id") if get_domain_from_id(requester_user_id) != origin: raise SynapseError(403, "requester_user_id doesn't match origin") new_content = yield self.handler.remove_room_from_group( group_id, requester_user_id, room_id, ) defer.returnValue((200, new_content)) class FederationGroupsUsersServlet(BaseFederationServlet): """Get the users in a group on behalf of a user """ PATH = "/groups/(?P[^/]*)/users$" @defer.inlineCallbacks def on_GET(self, origin, content, query, group_id): requester_user_id = parse_string_from_args(query, "requester_user_id") if get_domain_from_id(requester_user_id) != origin: raise SynapseError(403, "requester_user_id doesn't match origin") new_content = yield self.handler.get_users_in_group( group_id, requester_user_id ) defer.returnValue((200, new_content)) class FederationGroupsInvitedUsersServlet(BaseFederationServlet): """Get the users that have been invited to a group """ PATH = "/groups/(?P[^/]*)/invited_users$" @defer.inlineCallbacks def on_GET(self, origin, content, query, group_id): requester_user_id = parse_string_from_args(query, "requester_user_id") if get_domain_from_id(requester_user_id) != origin: raise SynapseError(403, "requester_user_id doesn't match origin") new_content = yield self.handler.get_invited_users_in_group( group_id, requester_user_id ) defer.returnValue((200, new_content)) class FederationGroupsInviteServlet(BaseFederationServlet): """Ask a group server to invite someone to the group """ PATH = "/groups/(?P[^/]*)/users/(?P[^/]*)/invite$" @defer.inlineCallbacks def on_POST(self, origin, content, query, group_id, user_id): requester_user_id = parse_string_from_args(query, "requester_user_id") if get_domain_from_id(requester_user_id) != origin: raise SynapseError(403, "requester_user_id doesn't match origin") new_content = yield self.handler.invite_to_group( group_id, user_id, requester_user_id, content, ) defer.returnValue((200, new_content)) class FederationGroupsAcceptInviteServlet(BaseFederationServlet): """Accept an invitation from the group server """ PATH = "/groups/(?P[^/]*)/users/(?P[^/]*)/accept_invite$" @defer.inlineCallbacks def on_POST(self, origin, 
content, query, group_id, user_id): if get_domain_from_id(user_id) != origin: raise SynapseError(403, "user_id doesn't match origin") new_content = yield self.handler.accept_invite( group_id, user_id, content, ) defer.returnValue((200, new_content)) class FederationGroupsRemoveUserServlet(BaseFederationServlet): """Leave or kick a user from the group """ PATH = "/groups/(?P[^/]*)/users/(?P[^/]*)/remove$" @defer.inlineCallbacks def on_POST(self, origin, content, query, group_id, user_id): requester_user_id = parse_string_from_args(query, "requester_user_id") if get_domain_from_id(requester_user_id) != origin: raise SynapseError(403, "requester_user_id doesn't match origin") new_content = yield self.handler.remove_user_from_group( group_id, user_id, requester_user_id, content, ) defer.returnValue((200, new_content)) class FederationGroupsLocalInviteServlet(BaseFederationServlet): """A group server has invited a local user """ PATH = "/groups/local/(?P[^/]*)/users/(?P[^/]*)/invite$" @defer.inlineCallbacks def on_POST(self, origin, content, query, group_id, user_id): if get_domain_from_id(group_id) != origin: raise SynapseError(403, "group_id doesn't match origin") new_content = yield self.handler.on_invite( group_id, user_id, content, ) defer.returnValue((200, new_content)) class FederationGroupsRemoveLocalUserServlet(BaseFederationServlet): """A group server has removed a local user """ PATH = "/groups/local/(?P[^/]*)/users/(?P[^/]*)/remove$" @defer.inlineCallbacks def on_POST(self, origin, content, query, group_id, user_id): if get_domain_from_id(group_id) != origin: raise SynapseError(403, "user_id doesn't match origin") new_content = yield self.handler.user_removed_from_group( group_id, user_id, content, ) defer.returnValue((200, new_content)) class FederationGroupsRenewAttestaionServlet(BaseFederationServlet): """A group or user's server renews their attestation """ PATH = "/groups/(?P[^/]*)/renew_attestation/(?P[^/]*)$" @defer.inlineCallbacks def on_POST(self, origin, content, query, group_id, user_id): # We don't need to check auth here as we check the attestation signatures new_content = yield self.handler.on_renew_attestation( group_id, user_id, content ) defer.returnValue((200, new_content)) class FederationGroupsSummaryRoomsServlet(BaseFederationServlet): """Add/remove a room from the group summary, with optional category. Matches both: - /groups/:group/summary/rooms/:room_id - /groups/:group/summary/categories/:category/rooms/:room_id """ PATH = ( "/groups/(?P[^/]*)/summary" "(/categories/(?P[^/]+))?" 
"/rooms/(?P[^/]*)$" ) @defer.inlineCallbacks def on_POST(self, origin, content, query, group_id, category_id, room_id): requester_user_id = parse_string_from_args(query, "requester_user_id") if get_domain_from_id(requester_user_id) != origin: raise SynapseError(403, "requester_user_id doesn't match origin") if category_id == "": raise SynapseError(400, "category_id cannot be empty string") resp = yield self.handler.update_group_summary_room( group_id, requester_user_id, room_id=room_id, category_id=category_id, content=content, ) defer.returnValue((200, resp)) @defer.inlineCallbacks def on_DELETE(self, origin, content, query, group_id, category_id, room_id): requester_user_id = parse_string_from_args(query, "requester_user_id") if get_domain_from_id(requester_user_id) != origin: raise SynapseError(403, "requester_user_id doesn't match origin") if category_id == "": raise SynapseError(400, "category_id cannot be empty string") resp = yield self.handler.delete_group_summary_room( group_id, requester_user_id, room_id=room_id, category_id=category_id, ) defer.returnValue((200, resp)) class FederationGroupsCategoriesServlet(BaseFederationServlet): """Get all categories for a group """ PATH = ( "/groups/(?P[^/]*)/categories/$" ) @defer.inlineCallbacks def on_GET(self, origin, content, query, group_id): requester_user_id = parse_string_from_args(query, "requester_user_id") if get_domain_from_id(requester_user_id) != origin: raise SynapseError(403, "requester_user_id doesn't match origin") resp = yield self.handler.get_group_categories( group_id, requester_user_id, ) defer.returnValue((200, resp)) class FederationGroupsCategoryServlet(BaseFederationServlet): """Add/remove/get a category in a group """ PATH = ( "/groups/(?P[^/]*)/categories/(?P[^/]+)$" ) @defer.inlineCallbacks def on_GET(self, origin, content, query, group_id, category_id): requester_user_id = parse_string_from_args(query, "requester_user_id") if get_domain_from_id(requester_user_id) != origin: raise SynapseError(403, "requester_user_id doesn't match origin") resp = yield self.handler.get_group_category( group_id, requester_user_id, category_id ) defer.returnValue((200, resp)) @defer.inlineCallbacks def on_POST(self, origin, content, query, group_id, category_id): requester_user_id = parse_string_from_args(query, "requester_user_id") if get_domain_from_id(requester_user_id) != origin: raise SynapseError(403, "requester_user_id doesn't match origin") if category_id == "": raise SynapseError(400, "category_id cannot be empty string") resp = yield self.handler.upsert_group_category( group_id, requester_user_id, category_id, content, ) defer.returnValue((200, resp)) @defer.inlineCallbacks def on_DELETE(self, origin, content, query, group_id, category_id): requester_user_id = parse_string_from_args(query, "requester_user_id") if get_domain_from_id(requester_user_id) != origin: raise SynapseError(403, "requester_user_id doesn't match origin") if category_id == "": raise SynapseError(400, "category_id cannot be empty string") resp = yield self.handler.delete_group_category( group_id, requester_user_id, category_id, ) defer.returnValue((200, resp)) class FederationGroupsRolesServlet(BaseFederationServlet): """Get roles in a group """ PATH = ( "/groups/(?P[^/]*)/roles/$" ) @defer.inlineCallbacks def on_GET(self, origin, content, query, group_id): requester_user_id = parse_string_from_args(query, "requester_user_id") if get_domain_from_id(requester_user_id) != origin: raise SynapseError(403, "requester_user_id doesn't match origin") resp = 
yield self.handler.get_group_roles( group_id, requester_user_id, ) defer.returnValue((200, resp)) class FederationGroupsRoleServlet(BaseFederationServlet): """Add/remove/get a role in a group """ PATH = ( "/groups/(?P[^/]*)/roles/(?P[^/]+)$" ) @defer.inlineCallbacks def on_GET(self, origin, content, query, group_id, role_id): requester_user_id = parse_string_from_args(query, "requester_user_id") if get_domain_from_id(requester_user_id) != origin: raise SynapseError(403, "requester_user_id doesn't match origin") resp = yield self.handler.get_group_role( group_id, requester_user_id, role_id ) defer.returnValue((200, resp)) @defer.inlineCallbacks def on_POST(self, origin, content, query, group_id, role_id): requester_user_id = parse_string_from_args(query, "requester_user_id") if get_domain_from_id(requester_user_id) != origin: raise SynapseError(403, "requester_user_id doesn't match origin") if role_id == "": raise SynapseError(400, "role_id cannot be empty string") resp = yield self.handler.update_group_role( group_id, requester_user_id, role_id, content, ) defer.returnValue((200, resp)) @defer.inlineCallbacks def on_DELETE(self, origin, content, query, group_id, role_id): requester_user_id = parse_string_from_args(query, "requester_user_id") if get_domain_from_id(requester_user_id) != origin: raise SynapseError(403, "requester_user_id doesn't match origin") if role_id == "": raise SynapseError(400, "role_id cannot be empty string") resp = yield self.handler.delete_group_role( group_id, requester_user_id, role_id, ) defer.returnValue((200, resp)) class FederationGroupsSummaryUsersServlet(BaseFederationServlet): """Add/remove a user from the group summary, with optional role. Matches both: - /groups/:group/summary/users/:user_id - /groups/:group/summary/roles/:role/users/:user_id """ PATH = ( "/groups/(?P[^/]*)/summary" "(/roles/(?P[^/]+))?" 
"/users/(?P[^/]*)$" ) @defer.inlineCallbacks def on_POST(self, origin, content, query, group_id, role_id, user_id): requester_user_id = parse_string_from_args(query, "requester_user_id") if get_domain_from_id(requester_user_id) != origin: raise SynapseError(403, "requester_user_id doesn't match origin") if role_id == "": raise SynapseError(400, "role_id cannot be empty string") resp = yield self.handler.update_group_summary_user( group_id, requester_user_id, user_id=user_id, role_id=role_id, content=content, ) defer.returnValue((200, resp)) @defer.inlineCallbacks def on_DELETE(self, origin, content, query, group_id, role_id, user_id): requester_user_id = parse_string_from_args(query, "requester_user_id") if get_domain_from_id(requester_user_id) != origin: raise SynapseError(403, "requester_user_id doesn't match origin") if role_id == "": raise SynapseError(400, "role_id cannot be empty string") resp = yield self.handler.delete_group_summary_user( group_id, requester_user_id, user_id=user_id, role_id=role_id, ) defer.returnValue((200, resp)) class FederationGroupsBulkPublicisedServlet(BaseFederationServlet): """Get roles in a group """ PATH = ( "/get_groups_publicised$" ) @defer.inlineCallbacks def on_POST(self, origin, content, query): resp = yield self.handler.bulk_get_publicised_groups( content["user_ids"], proxy=False, ) defer.returnValue((200, resp)) FEDERATION_SERVLET_CLASSES = ( FederationSendServlet, FederationPullServlet, FederationEventServlet, FederationStateServlet, FederationStateIdsServlet, FederationBackfillServlet, FederationQueryServlet, FederationMakeJoinServlet, FederationMakeLeaveServlet, FederationEventServlet, FederationSendJoinServlet, FederationSendLeaveServlet, FederationInviteServlet, FederationQueryAuthServlet, FederationGetMissingEventsServlet, FederationEventAuthServlet, FederationClientKeysQueryServlet, FederationUserDevicesQueryServlet, FederationClientKeysClaimServlet, FederationThirdPartyInviteExchangeServlet, On3pidBindServlet, OpenIdUserInfo, FederationVersionServlet, ) ROOM_LIST_CLASSES = ( PublicRoomList, ) GROUP_SERVER_SERVLET_CLASSES = ( FederationGroupsProfileServlet, FederationGroupsSummaryServlet, FederationGroupsRoomsServlet, FederationGroupsUsersServlet, FederationGroupsInvitedUsersServlet, FederationGroupsInviteServlet, FederationGroupsAcceptInviteServlet, FederationGroupsRemoveUserServlet, FederationGroupsSummaryRoomsServlet, FederationGroupsCategoriesServlet, FederationGroupsCategoryServlet, FederationGroupsRolesServlet, FederationGroupsRoleServlet, FederationGroupsSummaryUsersServlet, ) GROUP_LOCAL_SERVLET_CLASSES = ( FederationGroupsLocalInviteServlet, FederationGroupsRemoveLocalUserServlet, FederationGroupsBulkPublicisedServlet, ) GROUP_ATTESTATION_SERVLET_CLASSES = ( FederationGroupsRenewAttestaionServlet, ) def register_servlets(hs, resource, authenticator, ratelimiter): for servletclass in FEDERATION_SERVLET_CLASSES: servletclass( handler=hs.get_replication_layer(), authenticator=authenticator, ratelimiter=ratelimiter, server_name=hs.hostname, ).register(resource) for servletclass in ROOM_LIST_CLASSES: servletclass( handler=hs.get_room_list_handler(), authenticator=authenticator, ratelimiter=ratelimiter, server_name=hs.hostname, ).register(resource) for servletclass in GROUP_SERVER_SERVLET_CLASSES: servletclass( handler=hs.get_groups_server_handler(), authenticator=authenticator, ratelimiter=ratelimiter, server_name=hs.hostname, ).register(resource) for servletclass in GROUP_LOCAL_SERVLET_CLASSES: servletclass( 
handler=hs.get_groups_local_handler(), authenticator=authenticator, ratelimiter=ratelimiter, server_name=hs.hostname, ).register(resource) for servletclass in GROUP_ATTESTATION_SERVLET_CLASSES: servletclass( handler=hs.get_groups_attestation_renewer(), authenticator=authenticator, ratelimiter=ratelimiter, server_name=hs.hostname, ).register(resource) synapse-0.24.0/synapse/federation/units.py000066400000000000000000000063171317335640100206070ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Defines the JSON structure of the protocol units used by the server to server protocol. """ from synapse.util.jsonobject import JsonEncodedObject import logging logger = logging.getLogger(__name__) class Edu(JsonEncodedObject): """ An Edu represents a piece of data sent from one homeserver to another. In comparison to Pdus, Edus are not persisted for a long time on disk, are not meaningful beyond a given pair of homeservers, and don't have an internal ID or previous references graph. """ valid_keys = [ "origin", "destination", "edu_type", "content", ] required_keys = [ "edu_type", ] internal_keys = [ "origin", "destination", ] class Transaction(JsonEncodedObject): """ A transaction is a list of Pdus and Edus to be sent to a remote home server with some extra metadata. Example transaction:: { "origin": "foo", "prev_ids": ["abc", "def"], "pdus": [ ... ], } """ valid_keys = [ "transaction_id", "origin", "destination", "origin_server_ts", "previous_ids", "pdus", "edus", "transaction_id", "destination", "pdu_failures", ] internal_keys = [ "transaction_id", "destination", ] required_keys = [ "transaction_id", "origin", "destination", "origin_server_ts", "pdus", ] def __init__(self, transaction_id=None, pdus=[], **kwargs): """ If we include a list of pdus then we decode then as PDU's automatically. """ # If there's no EDUs then remove the arg if "edus" in kwargs and not kwargs["edus"]: del kwargs["edus"] super(Transaction, self).__init__( transaction_id=transaction_id, pdus=pdus, **kwargs ) @staticmethod def create_new(pdus, **kwargs): """ Used to create a new transaction. Will auto fill out transaction_id and origin_server_ts keys. 
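Note that, despite the wording above, both keys must currently be supplied by the caller: the checks below raise KeyError if either origin_server_ts or transaction_id is missing, and _send_new_transaction passes both in explicitly.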
""" if "origin_server_ts" not in kwargs: raise KeyError( "Require 'origin_server_ts' to construct a Transaction" ) if "transaction_id" not in kwargs: raise KeyError( "Require 'transaction_id' to construct a Transaction" ) for p in pdus: p.transaction_id = kwargs["transaction_id"] kwargs["pdus"] = [p.get_pdu_json() for p in pdus] return Transaction(**kwargs) synapse-0.24.0/synapse/groups/000077500000000000000000000000001317335640100162635ustar00rootroot00000000000000synapse-0.24.0/synapse/groups/__init__.py000066400000000000000000000000001317335640100203620ustar00rootroot00000000000000synapse-0.24.0/synapse/groups/attestations.py000066400000000000000000000126231317335640100213630ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2017 Vector Creations Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from twisted.internet import defer from synapse.api.errors import SynapseError from synapse.types import get_domain_from_id from synapse.util.logcontext import preserve_fn from signedjson.sign import sign_json # Default validity duration for new attestations we create DEFAULT_ATTESTATION_LENGTH_MS = 3 * 24 * 60 * 60 * 1000 # Start trying to update our attestations when they come this close to expiring UPDATE_ATTESTATION_TIME_MS = 1 * 24 * 60 * 60 * 1000 class GroupAttestationSigning(object): """Creates and verifies group attestations. """ def __init__(self, hs): self.keyring = hs.get_keyring() self.clock = hs.get_clock() self.server_name = hs.hostname self.signing_key = hs.config.signing_key[0] @defer.inlineCallbacks def verify_attestation(self, attestation, group_id, user_id, server_name=None): """Verifies that the given attestation matches the given parameters. An optional server_name can be supplied to explicitly set which server's signature is expected. Otherwise assumes that either the group_id or user_id is local and uses the other's server as the one to check. """ if not server_name: if get_domain_from_id(group_id) == self.server_name: server_name = get_domain_from_id(user_id) elif get_domain_from_id(user_id) == self.server_name: server_name = get_domain_from_id(group_id) else: raise Exception("Expected either group_id or user_id to be local") if user_id != attestation["user_id"]: raise SynapseError(400, "Attestation has incorrect user_id") if group_id != attestation["group_id"]: raise SynapseError(400, "Attestation has incorrect group_id") valid_until_ms = attestation["valid_until_ms"] # TODO: We also want to check that *new* attestations that people give # us to store are valid for at least a little while. if valid_until_ms < self.clock.time_msec(): raise SynapseError(400, "Attestation expired") yield self.keyring.verify_json_for_server(server_name, attestation) def create_attestation(self, group_id, user_id): """Create an attestation for the group_id and user_id with default validity length. 
""" return sign_json({ "group_id": group_id, "user_id": user_id, "valid_until_ms": self.clock.time_msec() + DEFAULT_ATTESTATION_LENGTH_MS, }, self.server_name, self.signing_key) class GroupAttestionRenewer(object): """Responsible for sending and receiving attestation updates. """ def __init__(self, hs): self.clock = hs.get_clock() self.store = hs.get_datastore() self.assestations = hs.get_groups_attestation_signing() self.transport_client = hs.get_federation_transport_client() self.is_mine_id = hs.is_mine_id self.attestations = hs.get_groups_attestation_signing() self._renew_attestations_loop = self.clock.looping_call( self._renew_attestations, 30 * 60 * 1000, ) @defer.inlineCallbacks def on_renew_attestation(self, group_id, user_id, content): """When a remote updates an attestation """ attestation = content["attestation"] if not self.is_mine_id(group_id) and not self.is_mine_id(user_id): raise SynapseError(400, "Neither user not group are on this server") yield self.attestations.verify_attestation( attestation, user_id=user_id, group_id=group_id, ) yield self.store.update_remote_attestion(group_id, user_id, attestation) defer.returnValue({}) @defer.inlineCallbacks def _renew_attestations(self): """Called periodically to check if we need to update any of our attestations """ now = self.clock.time_msec() rows = yield self.store.get_attestations_need_renewals( now + UPDATE_ATTESTATION_TIME_MS ) @defer.inlineCallbacks def _renew_attestation(group_id, user_id): attestation = self.attestations.create_attestation(group_id, user_id) if self.is_mine_id(group_id): destination = get_domain_from_id(user_id) else: destination = get_domain_from_id(group_id) yield self.transport_client.renew_group_attestation( destination, group_id, user_id, content={"attestation": attestation}, ) yield self.store.update_attestation_renewal( group_id, user_id, attestation ) for row in rows: group_id = row["group_id"] user_id = row["user_id"] preserve_fn(_renew_attestation)(group_id, user_id) synapse-0.24.0/synapse/groups/groups_server.py000066400000000000000000000636321317335640100215540ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2017 Vector Creations Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
from twisted.internet import defer from synapse.api.errors import SynapseError from synapse.types import UserID, get_domain_from_id, RoomID, GroupID import logging import urllib logger = logging.getLogger(__name__) # TODO: Allow users to "knock" or simpkly join depending on rules # TODO: Federation admin APIs # TODO: is_priveged flag to users and is_public to users and rooms # TODO: Audit log for admins (profile updates, membership changes, users who tried # to join but were rejected, etc) # TODO: Flairs class GroupsServerHandler(object): def __init__(self, hs): self.hs = hs self.store = hs.get_datastore() self.room_list_handler = hs.get_room_list_handler() self.auth = hs.get_auth() self.clock = hs.get_clock() self.keyring = hs.get_keyring() self.is_mine_id = hs.is_mine_id self.signing_key = hs.config.signing_key[0] self.server_name = hs.hostname self.attestations = hs.get_groups_attestation_signing() self.transport_client = hs.get_federation_transport_client() self.profile_handler = hs.get_profile_handler() # Ensure attestations get renewed hs.get_groups_attestation_renewer() @defer.inlineCallbacks def check_group_is_ours(self, group_id, and_exists=False, and_is_admin=None): """Check that the group is ours, and optionally if it exists. If group does exist then return group. Args: group_id (str) and_exists (bool): whether to also check if group exists and_is_admin (str): whether to also check if given str is a user_id that is an admin """ if not self.is_mine_id(group_id): raise SynapseError(400, "Group not on this server") group = yield self.store.get_group(group_id) if and_exists and not group: raise SynapseError(404, "Unknown group") if and_is_admin: is_admin = yield self.store.is_user_admin_in_group(group_id, and_is_admin) if not is_admin: raise SynapseError(403, "User is not admin in group") defer.returnValue(group) @defer.inlineCallbacks def get_group_summary(self, group_id, requester_user_id): """Get the summary for a group as seen by requester_user_id. The group summary consists of the profile of the room, and a curated list of users and rooms. These list *may* be organised by role/category. The roles/categories are ordered, and so are the users/rooms within them. A user/room may appear in multiple roles/categories. 
""" yield self.check_group_is_ours(group_id, and_exists=True) is_user_in_group = yield self.store.is_user_in_group(requester_user_id, group_id) profile = yield self.get_group_profile(group_id, requester_user_id) users, roles = yield self.store.get_users_for_summary_by_role( group_id, include_private=is_user_in_group, ) # TODO: Add profiles to users rooms, categories = yield self.store.get_rooms_for_summary_by_category( group_id, include_private=is_user_in_group, ) for room_entry in rooms: room_id = room_entry["room_id"] joined_users = yield self.store.get_users_in_room(room_id) entry = yield self.room_list_handler.generate_room_entry( room_id, len(joined_users), with_alias=False, allow_private=True, ) entry = dict(entry) # so we don't change whats cached entry.pop("room_id", None) room_entry["profile"] = entry rooms.sort(key=lambda e: e.get("order", 0)) for entry in users: user_id = entry["user_id"] if not self.is_mine_id(requester_user_id): attestation = yield self.store.get_remote_attestation(group_id, user_id) if not attestation: continue entry["attestation"] = attestation else: entry["attestation"] = self.attestations.create_attestation( group_id, user_id, ) user_profile = yield self.profile_handler.get_profile_from_cache(user_id) entry.update(user_profile) users.sort(key=lambda e: e.get("order", 0)) membership_info = yield self.store.get_users_membership_info_in_group( group_id, requester_user_id, ) defer.returnValue({ "profile": profile, "users_section": { "users": users, "roles": roles, "total_user_count_estimate": 0, # TODO }, "rooms_section": { "rooms": rooms, "categories": categories, "total_room_count_estimate": 0, # TODO }, "user": membership_info, }) @defer.inlineCallbacks def update_group_summary_room(self, group_id, user_id, room_id, category_id, content): """Add/update a room to the group summary """ yield self.check_group_is_ours(group_id, and_exists=True, and_is_admin=user_id) RoomID.from_string(room_id) # Ensure valid room id order = content.get("order", None) is_public = _parse_visibility_from_contents(content) yield self.store.add_room_to_summary( group_id=group_id, room_id=room_id, category_id=category_id, order=order, is_public=is_public, ) defer.returnValue({}) @defer.inlineCallbacks def delete_group_summary_room(self, group_id, user_id, room_id, category_id): """Remove a room from the summary """ yield self.check_group_is_ours(group_id, and_exists=True, and_is_admin=user_id) yield self.store.remove_room_from_summary( group_id=group_id, room_id=room_id, category_id=category_id, ) defer.returnValue({}) @defer.inlineCallbacks def get_group_categories(self, group_id, user_id): """Get all categories in a group (as seen by user) """ yield self.check_group_is_ours(group_id, and_exists=True) categories = yield self.store.get_group_categories( group_id=group_id, ) defer.returnValue({"categories": categories}) @defer.inlineCallbacks def get_group_category(self, group_id, user_id, category_id): """Get a specific category in a group (as seen by user) """ yield self.check_group_is_ours(group_id, and_exists=True) res = yield self.store.get_group_category( group_id=group_id, category_id=category_id, ) defer.returnValue(res) @defer.inlineCallbacks def update_group_category(self, group_id, user_id, category_id, content): """Add/Update a group category """ yield self.check_group_is_ours(group_id, and_exists=True, and_is_admin=user_id) is_public = _parse_visibility_from_contents(content) profile = content.get("profile") yield self.store.upsert_group_category( group_id=group_id, 
category_id=category_id, is_public=is_public, profile=profile, ) defer.returnValue({}) @defer.inlineCallbacks def delete_group_category(self, group_id, user_id, category_id): """Delete a group category """ yield self.check_group_is_ours(group_id, and_exists=True, and_is_admin=user_id) yield self.store.remove_group_category( group_id=group_id, category_id=category_id, ) defer.returnValue({}) @defer.inlineCallbacks def get_group_roles(self, group_id, user_id): """Get all roles in a group (as seen by user) """ yield self.check_group_is_ours(group_id, and_exists=True) roles = yield self.store.get_group_roles( group_id=group_id, ) defer.returnValue({"roles": roles}) @defer.inlineCallbacks def get_group_role(self, group_id, user_id, role_id): """Get a specific role in a group (as seen by user) """ yield self.check_group_is_ours(group_id, and_exists=True) res = yield self.store.get_group_role( group_id=group_id, role_id=role_id, ) defer.returnValue(res) @defer.inlineCallbacks def update_group_role(self, group_id, user_id, role_id, content): """Add/update a role in a group """ yield self.check_group_is_ours(group_id, and_exists=True, and_is_admin=user_id) is_public = _parse_visibility_from_contents(content) profile = content.get("profile") yield self.store.upsert_group_role( group_id=group_id, role_id=role_id, is_public=is_public, profile=profile, ) defer.returnValue({}) @defer.inlineCallbacks def delete_group_role(self, group_id, user_id, role_id): """Remove role from group """ yield self.check_group_is_ours(group_id, and_exists=True, and_is_admin=user_id) yield self.store.remove_group_role( group_id=group_id, role_id=role_id, ) defer.returnValue({}) @defer.inlineCallbacks def update_group_summary_user(self, group_id, requester_user_id, user_id, role_id, content): """Add/update a users entry in the group summary """ yield self.check_group_is_ours( group_id, and_exists=True, and_is_admin=requester_user_id, ) order = content.get("order", None) is_public = _parse_visibility_from_contents(content) yield self.store.add_user_to_summary( group_id=group_id, user_id=user_id, role_id=role_id, order=order, is_public=is_public, ) defer.returnValue({}) @defer.inlineCallbacks def delete_group_summary_user(self, group_id, requester_user_id, user_id, role_id): """Remove a user from the group summary """ yield self.check_group_is_ours( group_id, and_exists=True, and_is_admin=requester_user_id, ) yield self.store.remove_user_from_summary( group_id=group_id, user_id=user_id, role_id=role_id, ) defer.returnValue({}) @defer.inlineCallbacks def get_group_profile(self, group_id, requester_user_id): """Get the group profile as seen by requester_user_id """ yield self.check_group_is_ours(group_id) group_description = yield self.store.get_group(group_id) if group_description: defer.returnValue(group_description) else: raise SynapseError(404, "Unknown group") @defer.inlineCallbacks def update_group_profile(self, group_id, requester_user_id, content): """Update the group profile """ yield self.check_group_is_ours( group_id, and_exists=True, and_is_admin=requester_user_id, ) profile = {} for keyname in ("name", "avatar_url", "short_description", "long_description"): if keyname in content: value = content[keyname] if not isinstance(value, basestring): raise SynapseError(400, "%r value is not a string" % (keyname,)) profile[keyname] = value yield self.store.update_group_profile(group_id, profile) @defer.inlineCallbacks def get_users_in_group(self, group_id, requester_user_id): """Get the users in group as seen by 
requester_user_id. The ordering is arbitrary at the moment """ yield self.check_group_is_ours(group_id, and_exists=True) is_user_in_group = yield self.store.is_user_in_group(requester_user_id, group_id) user_results = yield self.store.get_users_in_group( group_id, include_private=is_user_in_group, ) chunk = [] for user_result in user_results: g_user_id = user_result["user_id"] is_public = user_result["is_public"] entry = {"user_id": g_user_id} profile = yield self.profile_handler.get_profile_from_cache(g_user_id) entry.update(profile) if not is_public: entry["is_public"] = False if not self.is_mine_id(g_user_id): attestation = yield self.store.get_remote_attestation(group_id, g_user_id) if not attestation: continue entry["attestation"] = attestation else: entry["attestation"] = self.attestations.create_attestation( group_id, g_user_id, ) chunk.append(entry) # TODO: If admin add lists of users whose attestations have timed out defer.returnValue({ "chunk": chunk, "total_user_count_estimate": len(user_results), }) @defer.inlineCallbacks def get_invited_users_in_group(self, group_id, requester_user_id): """Get the users that have been invited to a group as seen by requester_user_id. The ordering is arbitrary at the moment """ yield self.check_group_is_ours(group_id, and_exists=True) is_user_in_group = yield self.store.is_user_in_group(requester_user_id, group_id) if not is_user_in_group: raise SynapseError(403, "User not in group") invited_users = yield self.store.get_invited_users_in_group(group_id) user_profiles = [] for user_id in invited_users: user_profile = { "user_id": user_id } try: profile = yield self.profile_handler.get_profile_from_cache(user_id) user_profile.update(profile) except Exception as e: logger.warn("Error getting profile for %s: %s", user_id, e) user_profiles.append(user_profile) defer.returnValue({ "chunk": user_profiles, "total_user_count_estimate": len(invited_users), }) @defer.inlineCallbacks def get_rooms_in_group(self, group_id, requester_user_id): """Get the rooms in group as seen by requester_user_id This returns rooms in order of decreasing number of joined users """ yield self.check_group_is_ours(group_id, and_exists=True) is_user_in_group = yield self.store.is_user_in_group(requester_user_id, group_id) room_results = yield self.store.get_rooms_in_group( group_id, include_private=is_user_in_group, ) chunk = [] for room_result in room_results: room_id = room_result["room_id"] is_public = room_result["is_public"] joined_users = yield self.store.get_users_in_room(room_id) entry = yield self.room_list_handler.generate_room_entry( room_id, len(joined_users), with_alias=False, allow_private=True, ) if not entry: continue if not is_public: entry["is_public"] = False chunk.append(entry) chunk.sort(key=lambda e: -e["num_joined_members"]) defer.returnValue({ "chunk": chunk, "total_room_count_estimate": len(room_results), }) @defer.inlineCallbacks def add_room_to_group(self, group_id, requester_user_id, room_id, content): """Add room to group """ RoomID.from_string(room_id) # Ensure valid room id yield self.check_group_is_ours( group_id, and_exists=True, and_is_admin=requester_user_id ) is_public = _parse_visibility_from_contents(content) yield self.store.add_room_to_group(group_id, room_id, is_public=is_public) defer.returnValue({}) @defer.inlineCallbacks def remove_room_from_group(self, group_id, requester_user_id, room_id): """Remove room from group """ yield self.check_group_is_ours( group_id, and_exists=True, and_is_admin=requester_user_id ) yield 
self.store.remove_room_from_group(group_id, room_id) defer.returnValue({}) @defer.inlineCallbacks def invite_to_group(self, group_id, user_id, requester_user_id, content): """Invite user to group """ group = yield self.check_group_is_ours( group_id, and_exists=True, and_is_admin=requester_user_id ) # TODO: Check if user knocked # TODO: Check if user is already invited content = { "profile": { "name": group["name"], "avatar_url": group["avatar_url"], }, "inviter": requester_user_id, } if self.hs.is_mine_id(user_id): groups_local = self.hs.get_groups_local_handler() res = yield groups_local.on_invite(group_id, user_id, content) local_attestation = None else: local_attestation = self.attestations.create_attestation(group_id, user_id) content.update({ "attestation": local_attestation, }) res = yield self.transport_client.invite_to_group_notification( get_domain_from_id(user_id), group_id, user_id, content ) user_profile = res.get("user_profile", {}) yield self.store.add_remote_profile_cache( user_id, displayname=user_profile.get("displayname"), avatar_url=user_profile.get("avatar_url"), ) if res["state"] == "join": if not self.hs.is_mine_id(user_id): remote_attestation = res["attestation"] yield self.attestations.verify_attestation( remote_attestation, user_id=user_id, group_id=group_id, ) else: remote_attestation = None yield self.store.add_user_to_group( group_id, user_id, is_admin=False, is_public=False, # TODO local_attestation=local_attestation, remote_attestation=remote_attestation, ) elif res["state"] == "invite": yield self.store.add_group_invite( group_id, user_id, ) defer.returnValue({ "state": "invite" }) elif res["state"] == "reject": defer.returnValue({ "state": "reject" }) else: raise SynapseError(502, "Unknown state returned by HS") @defer.inlineCallbacks def accept_invite(self, group_id, user_id, content): """User tries to accept an invite to the group. This is different from them asking to join, and so should error if no invite exists (and they're not a member of the group) """ yield self.check_group_is_ours(group_id, and_exists=True) if not self.store.is_user_invited_to_local_group(group_id, user_id): raise SynapseError(403, "User not invited to group") if not self.hs.is_mine_id(user_id): remote_attestation = content["attestation"] yield self.attestations.verify_attestation( remote_attestation, user_id=user_id, group_id=group_id, ) else: remote_attestation = None local_attestation = self.attestations.create_attestation(group_id, user_id) is_public = _parse_visibility_from_contents(content) yield self.store.add_user_to_group( group_id, user_id, is_admin=False, is_public=is_public, local_attestation=local_attestation, remote_attestation=remote_attestation, ) defer.returnValue({ "state": "join", "attestation": local_attestation, }) @defer.inlineCallbacks def knock(self, group_id, user_id, content): """A user requests becoming a member of the group """ yield self.check_group_is_ours(group_id, and_exists=True) raise NotImplementedError() @defer.inlineCallbacks def accept_knock(self, group_id, user_id, content): """Accept a users knock to the room. Errors if the user hasn't knocked, rather than inviting them. """ yield self.check_group_is_ours(group_id, and_exists=True) raise NotImplementedError() @defer.inlineCallbacks def remove_user_from_group(self, group_id, user_id, requester_user_id, content): """Remove a user from the group; either a user is leaving or and admin kicked htem. 
""" yield self.check_group_is_ours(group_id, and_exists=True) is_kick = False if requester_user_id != user_id: is_admin = yield self.store.is_user_admin_in_group( group_id, requester_user_id ) if not is_admin: raise SynapseError(403, "User is not admin in group") is_kick = True yield self.store.remove_user_from_group( group_id, user_id, ) if is_kick: if self.hs.is_mine_id(user_id): groups_local = self.hs.get_groups_local_handler() yield groups_local.user_removed_from_group(group_id, user_id, {}) else: yield self.transport_client.remove_user_from_group_notification( get_domain_from_id(user_id), group_id, user_id, {} ) if not self.hs.is_mine_id(user_id): yield self.store.maybe_delete_remote_profile_cache(user_id) defer.returnValue({}) @defer.inlineCallbacks def create_group(self, group_id, user_id, content): group = yield self.check_group_is_ours(group_id) _validate_group_id(group_id) logger.info("Attempting to create group with ID: %r", group_id) if group: raise SynapseError(400, "Group already exists") is_admin = yield self.auth.is_server_admin(UserID.from_string(user_id)) if not is_admin: if not self.hs.config.enable_group_creation: raise SynapseError( 403, "Only server admin can create group on this server", ) localpart = GroupID.from_string(group_id).localpart if not localpart.startswith(self.hs.config.group_creation_prefix): raise SynapseError( 400, "Can only create groups with prefix %r on this server" % ( self.hs.config.group_creation_prefix, ), ) profile = content.get("profile", {}) name = profile.get("name") avatar_url = profile.get("avatar_url") short_description = profile.get("short_description") long_description = profile.get("long_description") user_profile = content.get("user_profile", {}) yield self.store.create_group( group_id, user_id, name=name, avatar_url=avatar_url, short_description=short_description, long_description=long_description, ) if not self.hs.is_mine_id(user_id): remote_attestation = content["attestation"] yield self.attestations.verify_attestation( remote_attestation, user_id=user_id, group_id=group_id, ) local_attestation = self.attestations.create_attestation(group_id, user_id) else: local_attestation = None remote_attestation = None yield self.store.add_user_to_group( group_id, user_id, is_admin=True, is_public=True, # TODO local_attestation=local_attestation, remote_attestation=remote_attestation, ) if not self.hs.is_mine_id(user_id): yield self.store.add_remote_profile_cache( user_id, displayname=user_profile.get("displayname"), avatar_url=user_profile.get("avatar_url"), ) defer.returnValue({ "group_id": group_id, }) def _parse_visibility_from_contents(content): """Given a content for a request parse out whether the entity should be public or not """ visibility = content.get("visibility") if visibility: vis_type = visibility["type"] if vis_type not in ("public", "private"): raise SynapseError( 400, "Synapse only supports 'public'/'private' visibility" ) is_public = vis_type == "public" else: is_public = True return is_public def _validate_group_id(group_id): """Validates the group ID is valid for creation on this home server """ localpart = GroupID.from_string(group_id).localpart if localpart.lower() != localpart: raise SynapseError(400, "Group ID must be lower case") if urllib.quote(localpart.encode('utf-8')) != localpart: raise SynapseError( 400, "Group ID can only contain characters a-z, 0-9, or '_-./'", ) 
synapse-0.24.0/synapse/handlers/000077500000000000000000000000001317335640100165445ustar00rootroot00000000000000synapse-0.24.0/synapse/handlers/__init__.py000066400000000000000000000044161317335640100206620ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from .register import RegistrationHandler from .room import ( RoomCreationHandler, RoomContextHandler, ) from .room_member import RoomMemberHandler from .message import MessageHandler from .federation import FederationHandler from .directory import DirectoryHandler from .admin import AdminHandler from .identity import IdentityHandler from .search import SearchHandler class Handlers(object): """ Deprecated. A collection of handlers. At some point most of the classes whose name ended "Handler" were accessed through this class. However this makes it painful to unit test the handlers and to run cut down versions of synapse that only use specific handlers because using a single handler required creating all of the handlers. So some of the handlers have been lifted out of the Handlers object and are now accessed directly through the homeserver object itself. Any new handlers should follow the new pattern of being accessed through the homeserver object and should not be added to the Handlers object. The remaining handlers should be moved out of the handlers object. """ def __init__(self, hs): self.registration_handler = RegistrationHandler(hs) self.message_handler = MessageHandler(hs) self.room_creation_handler = RoomCreationHandler(hs) self.room_member_handler = RoomMemberHandler(hs) self.federation_handler = FederationHandler(hs) self.directory_handler = DirectoryHandler(hs) self.admin_handler = AdminHandler(hs) self.identity_handler = IdentityHandler(hs) self.search_handler = SearchHandler(hs) self.room_context_handler = RoomContextHandler(hs) synapse-0.24.0/synapse/handlers/_base.py000066400000000000000000000143411317335640100201720ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014 - 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import logging from twisted.internet import defer import synapse.types from synapse.api.constants import Membership, EventTypes from synapse.api.errors import LimitExceededError from synapse.types import UserID logger = logging.getLogger(__name__) class BaseHandler(object): """ Common base class for the event handlers. 
Attributes: store (synapse.storage.DataStore): state_handler (synapse.state.StateHandler): """ def __init__(self, hs): """ Args: hs (synapse.server.HomeServer): """ self.store = hs.get_datastore() self.auth = hs.get_auth() self.notifier = hs.get_notifier() self.state_handler = hs.get_state_handler() self.distributor = hs.get_distributor() self.ratelimiter = hs.get_ratelimiter() self.clock = hs.get_clock() self.hs = hs self.server_name = hs.hostname self.event_builder_factory = hs.get_event_builder_factory() @defer.inlineCallbacks def ratelimit(self, requester, update=True): """Ratelimits requests. Args: requester (Requester) update (bool): Whether to record that a request is being processed. Set to False when doing multiple checks for one request (e.g. to check up front if we would reject the request), and set to True for the last call for a given request. Raises: LimitExceededError if the request should be ratelimited """ time_now = self.clock.time() user_id = requester.user.to_string() # The AS user itself is never rate limited. app_service = self.store.get_app_service_by_user_id(user_id) if app_service is not None: return # do not ratelimit app service senders # Disable rate limiting of users belonging to any AS that is configured # not to be rate limited in its registration file (rate_limited: true|false). if requester.app_service and not requester.app_service.is_rate_limited(): return # Check if there is a per user override in the DB. override = yield self.store.get_ratelimit_for_user(user_id) if override: # If overriden with a null Hz then ratelimiting has been entirely # disabled for the user if not override.messages_per_second: return messages_per_second = override.messages_per_second burst_count = override.burst_count else: messages_per_second = self.hs.config.rc_messages_per_second burst_count = self.hs.config.rc_message_burst_count allowed, time_allowed = self.ratelimiter.send_message( user_id, time_now, msg_rate_hz=messages_per_second, burst_count=burst_count, update=update, ) if not allowed: raise LimitExceededError( retry_after_ms=int(1000 * (time_allowed - time_now)), ) @defer.inlineCallbacks def maybe_kick_guest_users(self, event, context=None): # Technically this function invalidates current_state by changing it. # Hopefully this isn't that important to the caller. if event.type == EventTypes.GuestAccess: guest_access = event.content.get("guest_access", "forbidden") if guest_access != "can_join": if context: current_state = yield self.store.get_events( context.current_state_ids.values() ) else: current_state = yield self.state_handler.get_current_state( event.room_id ) current_state = current_state.values() logger.info("maybe_kick_guest_users %r", current_state) yield self.kick_guest_users(current_state) @defer.inlineCallbacks def kick_guest_users(self, current_state): for member_event in current_state: try: if member_event.type != EventTypes.Member: continue target_user = UserID.from_string(member_event.state_key) if not self.hs.is_mine(target_user): continue if member_event.content["membership"] not in { Membership.JOIN, Membership.INVITE }: continue if ( "kind" not in member_event.content or member_event.content["kind"] != "guest" ): continue # We make the user choose to leave, rather than have the # event-sender kick them. 
This is partially because we don't # need to worry about power levels, and partially because guest # users are a concept which doesn't hugely work over federation, # and having homeservers have their own users leave keeps more # of that decision-making and control local to the guest-having # homeserver. requester = synapse.types.create_requester( target_user, is_guest=True) handler = self.hs.get_handlers().room_member_handler yield handler.update_membership( requester, target_user, member_event.room_id, "leave", ratelimit=False, ) except Exception as e: logger.warn("Error kicking guest user: %s" % (e,)) synapse-0.24.0/synapse/handlers/account_data.py000066400000000000000000000042601317335640100215450ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from twisted.internet import defer class AccountDataEventSource(object): def __init__(self, hs): self.store = hs.get_datastore() def get_current_key(self, direction='f'): return self.store.get_max_account_data_stream_id() @defer.inlineCallbacks def get_new_events(self, user, from_key, **kwargs): user_id = user.to_string() last_stream_id = from_key current_stream_id = yield self.store.get_max_account_data_stream_id() results = [] tags = yield self.store.get_updated_tags(user_id, last_stream_id) for room_id, room_tags in tags.items(): results.append({ "type": "m.tag", "content": {"tags": room_tags}, "room_id": room_id, }) account_data, room_account_data = ( yield self.store.get_updated_account_data_for_user(user_id, last_stream_id) ) for account_data_type, content in account_data.items(): results.append({ "type": account_data_type, "content": content, }) for room_id, account_data in room_account_data.items(): for account_data_type, content in account_data.items(): results.append({ "type": account_data_type, "content": content, "room_id": room_id, }) defer.returnValue((results, current_stream_id)) @defer.inlineCallbacks def get_pagination_rows(self, user, config, key): defer.returnValue(([], config.to_id)) synapse-0.24.0/synapse/handlers/admin.py000066400000000000000000000055611317335640100202150ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
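"""Admin-facing handler: exposes per-user whois information and helpers for
listing, paginating and searching the users registered on this homeserver.
"""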
from twisted.internet import defer from ._base import BaseHandler import logging logger = logging.getLogger(__name__) class AdminHandler(BaseHandler): def __init__(self, hs): super(AdminHandler, self).__init__(hs) @defer.inlineCallbacks def get_whois(self, user): connections = [] sessions = yield self.store.get_user_ip_and_agents(user) for session in sessions: connections.append({ "ip": session["ip"], "last_seen": session["last_seen"], "user_agent": session["user_agent"], }) ret = { "user_id": user.to_string(), "devices": { "": { "sessions": [ { "connections": connections, } ] }, }, } defer.returnValue(ret) @defer.inlineCallbacks def get_users(self): """Function to reterive a list of users in users table. Args: Returns: defer.Deferred: resolves to list[dict[str, Any]] """ ret = yield self.store.get_users() defer.returnValue(ret) @defer.inlineCallbacks def get_users_paginate(self, order, start, limit): """Function to reterive a paginated list of users from users list. This will return a json object, which contains list of users and the total number of users in users table. Args: order (str): column name to order the select by this column start (int): start number to begin the query from limit (int): number of rows to reterive Returns: defer.Deferred: resolves to json object {list[dict[str, Any]], count} """ ret = yield self.store.get_users_paginate(order, start, limit) defer.returnValue(ret) @defer.inlineCallbacks def search_users(self, term): """Function to search users list for one or more users with the matched term. Args: term (str): search term Returns: defer.Deferred: resolves to list[dict[str, Any]] """ ret = yield self.store.search_users(term) defer.returnValue(ret) synapse-0.24.0/synapse/handlers/appservice.py000066400000000000000000000230051317335640100212570ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from twisted.internet import defer from synapse.api.constants import EventTypes from synapse.util.metrics import Measure from synapse.util.logcontext import preserve_fn, preserve_context_over_deferred import logging logger = logging.getLogger(__name__) def log_failure(failure): logger.error( "Application Services Failure", exc_info=( failure.type, failure.value, failure.getTracebackObject() ) ) class ApplicationServicesHandler(object): def __init__(self, hs): self.store = hs.get_datastore() self.is_mine_id = hs.is_mine_id self.appservice_api = hs.get_application_service_api() self.scheduler = hs.get_application_service_scheduler() self.started_scheduler = False self.clock = hs.get_clock() self.notify_appservices = hs.config.notify_appservices self.current_max = 0 self.is_processing = False @defer.inlineCallbacks def notify_interested_services(self, current_id): """Notifies (pushes) all application services interested in this event. Pushing is done asynchronously, so this method won't block for any prolonged length of time. Args: current_id(int): The current maximum ID. 
""" services = self.store.get_app_services() if not services or not self.notify_appservices: return self.current_max = max(self.current_max, current_id) if self.is_processing: return with Measure(self.clock, "notify_interested_services"): self.is_processing = True try: upper_bound = self.current_max limit = 100 while True: upper_bound, events = yield self.store.get_new_events_for_appservice( upper_bound, limit ) if not events: break for event in events: # Gather interested services services = yield self._get_services_for_event(event) if len(services) == 0: continue # no services need notifying # Do we know this user exists? If not, poke the user # query API for all services which match that user regex. # This needs to block as these user queries need to be # made BEFORE pushing the event. yield self._check_user_exists(event.sender) if event.type == EventTypes.Member: yield self._check_user_exists(event.state_key) if not self.started_scheduler: self.scheduler.start().addErrback(log_failure) self.started_scheduler = True # Fork off pushes to these services for service in services: preserve_fn(self.scheduler.submit_event_for_as)( service, event ) yield self.store.set_appservice_last_pos(upper_bound) if len(events) < limit: break finally: self.is_processing = False @defer.inlineCallbacks def query_user_exists(self, user_id): """Check if any application service knows this user_id exists. Args: user_id(str): The user to query if they exist on any AS. Returns: True if this user exists on at least one application service. """ user_query_services = yield self._get_services_for_user( user_id=user_id ) for user_service in user_query_services: is_known_user = yield self.appservice_api.query_user( user_service, user_id ) if is_known_user: defer.returnValue(True) defer.returnValue(False) @defer.inlineCallbacks def query_room_alias_exists(self, room_alias): """Check if an application service knows this room alias exists. Args: room_alias(RoomAlias): The room alias to query. Returns: namedtuple: with keys "room_id" and "servers" or None if no association can be found. """ room_alias_str = room_alias.to_string() services = self.store.get_app_services() alias_query_services = [ s for s in services if ( s.is_interested_in_alias(room_alias_str) ) ] for alias_service in alias_query_services: is_known_alias = yield self.appservice_api.query_alias( alias_service, room_alias_str ) if is_known_alias: # the alias exists now so don't query more ASes. 
result = yield self.store.get_association_from_room_alias( room_alias ) defer.returnValue(result) @defer.inlineCallbacks def query_3pe(self, kind, protocol, fields): services = yield self._get_services_for_3pn(protocol) results = yield preserve_context_over_deferred(defer.DeferredList([ preserve_fn(self.appservice_api.query_3pe)(service, kind, protocol, fields) for service in services ], consumeErrors=True)) ret = [] for (success, result) in results: if success: ret.extend(result) defer.returnValue(ret) @defer.inlineCallbacks def get_3pe_protocols(self, only_protocol=None): services = self.store.get_app_services() protocols = {} # Collect up all the individual protocol responses out of the ASes for s in services: for p in s.protocols: if only_protocol is not None and p != only_protocol: continue if p not in protocols: protocols[p] = [] info = yield self.appservice_api.get_3pe_protocol(s, p) if info is not None: protocols[p].append(info) def _merge_instances(infos): if not infos: return {} # Merge the 'instances' lists of multiple results, but just take # the other fields from the first as they ought to be identical # copy the result so as not to corrupt the cached one combined = dict(infos[0]) combined["instances"] = list(combined["instances"]) for info in infos[1:]: combined["instances"].extend(info["instances"]) return combined for p in protocols.keys(): protocols[p] = _merge_instances(protocols[p]) defer.returnValue(protocols) @defer.inlineCallbacks def _get_services_for_event(self, event): """Retrieve a list of application services interested in this event. Args: event(Event): The event to check. Can be None if alias_list is not. Returns: list: A list of services interested in this event based on the service regex. """ services = self.store.get_app_services() interested_list = [ s for s in services if ( yield s.is_interested(event, self.store) ) ] defer.returnValue(interested_list) def _get_services_for_user(self, user_id): services = self.store.get_app_services() interested_list = [ s for s in services if ( s.is_interested_in_user(user_id) ) ] return defer.succeed(interested_list) def _get_services_for_3pn(self, protocol): services = self.store.get_app_services() interested_list = [ s for s in services if s.is_interested_in_protocol(protocol) ] return defer.succeed(interested_list) @defer.inlineCallbacks def _is_unknown_user(self, user_id): if not self.is_mine_id(user_id): # we don't know if they are unknown or not since it isn't one of our # users. We can't poke ASes. defer.returnValue(False) return user_info = yield self.store.get_user_by_id(user_id) if user_info: defer.returnValue(False) return # user not found; could be the AS though, so check. services = self.store.get_app_services() service_list = [s for s in services if s.sender == user_id] defer.returnValue(len(service_list) == 0) @defer.inlineCallbacks def _check_user_exists(self, user_id): unknown_user = yield self._is_unknown_user(user_id) if unknown_user: exists = yield self.query_user_exists(user_id) defer.returnValue(exists) defer.returnValue(True) synapse-0.24.0/synapse/handlers/auth.py000066400000000000000000000653601317335640100200710ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014 - 2016 OpenMarket Ltd # Copyright 2017 Vector Creations Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from twisted.internet import defer from ._base import BaseHandler from synapse.api.constants import LoginType from synapse.types import UserID from synapse.api.errors import AuthError, LoginError, Codes, StoreError, SynapseError from synapse.util.async import run_on_reactor from synapse.util.caches.expiringcache import ExpiringCache from twisted.web.client import PartialDownloadError import logging import bcrypt import pymacaroons import simplejson import synapse.util.stringutils as stringutils logger = logging.getLogger(__name__) class AuthHandler(BaseHandler): SESSION_EXPIRE_MS = 48 * 60 * 60 * 1000 def __init__(self, hs): """ Args: hs (synapse.server.HomeServer): """ super(AuthHandler, self).__init__(hs) self.checkers = { LoginType.PASSWORD: self._check_password_auth, LoginType.RECAPTCHA: self._check_recaptcha, LoginType.EMAIL_IDENTITY: self._check_email_identity, LoginType.MSISDN: self._check_msisdn, LoginType.DUMMY: self._check_dummy_auth, } self.bcrypt_rounds = hs.config.bcrypt_rounds # This is not a cache per se, but a store of all current sessions that # expire after N hours self.sessions = ExpiringCache( cache_name="register_sessions", clock=hs.get_clock(), expiry_ms=self.SESSION_EXPIRE_MS, reset_expiry_on_get=True, ) account_handler = _AccountHandler( hs, check_user_exists=self.check_user_exists ) self.password_providers = [ module(config=config, account_handler=account_handler) for module, config in hs.config.password_providers ] logger.info("Extra password_providers: %r", self.password_providers) self.hs = hs # FIXME better possibility to access registrationHandler later? self.device_handler = hs.get_device_handler() self.macaroon_gen = hs.get_macaroon_generator() @defer.inlineCallbacks def check_auth(self, flows, clientdict, clientip): """ Takes a dictionary sent by the client in the login / registration protocol and handles the login flow. As a side effect, this function fills in the 'creds' key on the user's session with a map, which maps each auth-type (str) to the relevant identity authenticated by that auth-type (mostly str, but for captcha, bool). Args: flows (list): A list of login flows. Each flow is an ordered list of strings representing auth-types. At least one full flow must be completed in order for auth to be successful. clientdict: The dictionary from the client root level, not the 'auth' key: this method prompts for auth if none is sent. clientip (str): The IP address of the client. Returns: A tuple of (authed, dict, dict, session_id) where authed is true if the client has successfully completed an auth flow. If it is true the first dict contains the authenticated credentials of each stage. If authed is false, the first dictionary is the server response to the login request and should be passed back to the client. In either case, the second dict contains the parameters for this request (which may have been given only in a previous call). 
session_id is the ID of this session, either passed in by the client or assigned by the call to check_auth """ authdict = None sid = None if clientdict and 'auth' in clientdict: authdict = clientdict['auth'] del clientdict['auth'] if 'session' in authdict: sid = authdict['session'] session = self._get_session_info(sid) if len(clientdict) > 0: # This was designed to allow the client to omit the parameters # and just supply the session in subsequent calls so it split # auth between devices by just sharing the session, (eg. so you # could continue registration from your phone having clicked the # email auth link on there). It's probably too open to abuse # because it lets unauthenticated clients store arbitrary objects # on a home server. # Revisit: Assumimg the REST APIs do sensible validation, the data # isn't arbintrary. session['clientdict'] = clientdict self._save_session(session) elif 'clientdict' in session: clientdict = session['clientdict'] if not authdict: defer.returnValue( ( False, self._auth_dict_for_flows(flows, session), clientdict, session['id'] ) ) if 'creds' not in session: session['creds'] = {} creds = session['creds'] # check auth type currently being presented errordict = {} if 'type' in authdict: login_type = authdict['type'] if login_type not in self.checkers: raise LoginError(400, "", Codes.UNRECOGNIZED) try: result = yield self.checkers[login_type](authdict, clientip) if result: creds[login_type] = result self._save_session(session) except LoginError, e: if login_type == LoginType.EMAIL_IDENTITY: # riot used to have a bug where it would request a new # validation token (thus sending a new email) each time it # got a 401 with a 'flows' field. # (https://github.com/vector-im/vector-web/issues/2447). # # Grandfather in the old behaviour for now to avoid # breaking old riot deployments. raise e # this step failed. Merge the error dict into the response # so that the client can have another go. errordict = e.error_dict() for f in flows: if len(set(f) - set(creds.keys())) == 0: # it's very useful to know what args are stored, but this can # include the password in the case of registering, so only log # the keys (confusingly, clientdict may contain a password # param, creds is just what the user authed as for UI auth # and is not sensitive). logger.info( "Auth completed with creds: %r. Client dict has keys: %r", creds, clientdict.keys() ) defer.returnValue((True, creds, clientdict, session['id'])) ret = self._auth_dict_for_flows(flows, session) ret['completed'] = creds.keys() ret.update(errordict) defer.returnValue((False, ret, clientdict, session['id'])) @defer.inlineCallbacks def add_oob_auth(self, stagetype, authdict, clientip): """ Adds the result of out-of-band authentication into an existing auth session. Currently used for adding the result of fallback auth. """ if stagetype not in self.checkers: raise LoginError(400, "", Codes.MISSING_PARAM) if 'session' not in authdict: raise LoginError(400, "", Codes.MISSING_PARAM) sess = self._get_session_info( authdict['session'] ) if 'creds' not in sess: sess['creds'] = {} creds = sess['creds'] result = yield self.checkers[stagetype](authdict, clientip) if result: creds[stagetype] = result self._save_session(sess) defer.returnValue(True) defer.returnValue(False) def get_session_id(self, clientdict): """ Gets the session ID for a client given the client dictionary Args: clientdict: The dictionary sent by the client in the request Returns: str|None: The string session ID the client sent. 
If the client did not send a session ID, returns None. """ sid = None if clientdict and 'auth' in clientdict: authdict = clientdict['auth'] if 'session' in authdict: sid = authdict['session'] return sid def set_session_data(self, session_id, key, value): """ Store a key-value pair into the sessions data associated with this request. This data is stored server-side and cannot be modified by the client. Args: session_id (string): The ID of this session as returned from check_auth key (string): The key to store the data under value (any): The data to store """ sess = self._get_session_info(session_id) sess.setdefault('serverdict', {})[key] = value self._save_session(sess) def get_session_data(self, session_id, key, default=None): """ Retrieve data stored with set_session_data Args: session_id (string): The ID of this session as returned from check_auth key (string): The key to store the data under default (any): Value to return if the key has not been set """ sess = self._get_session_info(session_id) return sess.setdefault('serverdict', {}).get(key, default) def _check_password_auth(self, authdict, _): if "user" not in authdict or "password" not in authdict: raise LoginError(400, "", Codes.MISSING_PARAM) user_id = authdict["user"] password = authdict["password"] if not user_id.startswith('@'): user_id = UserID.create(user_id, self.hs.hostname).to_string() return self._check_password(user_id, password) @defer.inlineCallbacks def _check_recaptcha(self, authdict, clientip): try: user_response = authdict["response"] except KeyError: # Client tried to provide captcha but didn't give the parameter: # bad request. raise LoginError( 400, "Captcha response is required", errcode=Codes.CAPTCHA_NEEDED ) logger.info( "Submitting recaptcha response %s with remoteip %s", user_response, clientip ) # TODO: get this from the homeserver rather than creating a new one for # each request try: client = self.hs.get_simple_http_client() resp_body = yield client.post_urlencoded_get_json( self.hs.config.recaptcha_siteverify_api, args={ 'secret': self.hs.config.recaptcha_private_key, 'response': user_response, 'remoteip': clientip, } ) except PartialDownloadError as pde: # Twisted is silly data = pde.response resp_body = simplejson.loads(data) if 'success' in resp_body: # Note that we do NOT check the hostname here: we explicitly # intend the CAPTCHA to be presented by whatever client the # user is using, we just care that they have completed a CAPTCHA. logger.info( "%s reCAPTCHA from hostname %s", "Successful" if resp_body['success'] else "Failed", resp_body.get('hostname') ) if resp_body['success']: defer.returnValue(True) raise LoginError(401, "", errcode=Codes.UNAUTHORIZED) def _check_email_identity(self, authdict, _): return self._check_threepid('email', authdict) def _check_msisdn(self, authdict, _): return self._check_threepid('msisdn', authdict) @defer.inlineCallbacks def _check_dummy_auth(self, authdict, _): yield run_on_reactor() defer.returnValue(True) @defer.inlineCallbacks def _check_threepid(self, medium, authdict): yield run_on_reactor() if 'threepid_creds' not in authdict: raise LoginError(400, "Missing threepid_creds", Codes.MISSING_PARAM) threepid_creds = authdict['threepid_creds'] identity_handler = self.hs.get_handlers().identity_handler logger.info("Getting validated threepid. 
threepidcreds: %r", (threepid_creds,)) threepid = yield identity_handler.threepid_from_creds(threepid_creds) if not threepid: raise LoginError(401, "", errcode=Codes.UNAUTHORIZED) if threepid['medium'] != medium: raise LoginError( 401, "Expecting threepid of type '%s', got '%s'" % ( medium, threepid['medium'], ), errcode=Codes.UNAUTHORIZED ) threepid['threepid_creds'] = authdict['threepid_creds'] defer.returnValue(threepid) def _get_params_recaptcha(self): return {"public_key": self.hs.config.recaptcha_public_key} def _auth_dict_for_flows(self, flows, session): public_flows = [] for f in flows: public_flows.append(f) get_params = { LoginType.RECAPTCHA: self._get_params_recaptcha, } params = {} for f in public_flows: for stage in f: if stage in get_params and stage not in params: params[stage] = get_params[stage]() return { "session": session['id'], "flows": [{"stages": f} for f in public_flows], "params": params } def _get_session_info(self, session_id): if session_id not in self.sessions: session_id = None if not session_id: # create a new session while session_id is None or session_id in self.sessions: session_id = stringutils.random_string(24) self.sessions[session_id] = { "id": session_id, } return self.sessions[session_id] def validate_password_login(self, user_id, password): """ Authenticates the user with their username and password. Used only by the v1 login API. Args: user_id (str): complete @user:id password (str): Password Returns: defer.Deferred: (str) canonical user id Raises: StoreError if there was a problem accessing the database LoginError if there was an authentication problem. """ return self._check_password(user_id, password) @defer.inlineCallbacks def get_access_token_for_user_id(self, user_id, device_id=None, initial_display_name=None): """ Creates a new access token for the user with the given user ID. The user is assumed to have been authenticated by some other machanism (e.g. CAS), and the user_id converted to the canonical case. The device will be recorded in the table if it is not there already. Args: user_id (str): canonical User ID device_id (str|None): the device ID to associate with the tokens. None to leave the tokens unassociated with a device (deprecated: we should always have a device ID) initial_display_name (str): display name to associate with the device if it needs re-registering Returns: The access token for the user's session. Raises: StoreError if there was a problem storing the token. LoginError if there was an authentication problem. """ logger.info("Logging in user %s on device %s", user_id, device_id) access_token = yield self.issue_access_token(user_id, device_id) # the device *should* have been registered before we got here; however, # it's possible we raced against a DELETE operation. The thing we # really don't want is active access_tokens without a record of the # device, so we double-check it here. if device_id is not None: yield self.device_handler.check_device_registered( user_id, device_id, initial_display_name ) defer.returnValue(access_token) @defer.inlineCallbacks def check_user_exists(self, user_id): """ Checks to see if a user with the given id exists. Will check case insensitively, but return None if there are multiple inexact matches. 
Args: (str) user_id: complete @user:id Returns: defer.Deferred: (str) canonical_user_id, or None if zero or multiple matches """ res = yield self._find_user_id_and_pwd_hash(user_id) if res is not None: defer.returnValue(res[0]) defer.returnValue(None) @defer.inlineCallbacks def _find_user_id_and_pwd_hash(self, user_id): """Checks to see if a user with the given id exists. Will check case insensitively, but will return None if there are multiple inexact matches. Returns: tuple: A 2-tuple of `(canonical_user_id, password_hash)` None: if there is not exactly one match """ user_infos = yield self.store.get_users_by_id_case_insensitive(user_id) result = None if not user_infos: logger.warn("Attempted to login as %s but they do not exist", user_id) elif len(user_infos) == 1: # a single match (possibly not exact) result = user_infos.popitem() elif user_id in user_infos: # multiple matches, but one is exact result = (user_id, user_infos[user_id]) else: # multiple matches, none of them exact logger.warn( "Attempted to login as %s but it matches more than one user " "inexactly: %r", user_id, user_infos.keys() ) defer.returnValue(result) @defer.inlineCallbacks def _check_password(self, user_id, password): """Authenticate a user against the LDAP and local databases. user_id is checked case insensitively against the local database, but will throw if there are multiple inexact matches. Args: user_id (str): complete @user:id Returns: (str) the canonical_user_id Raises: LoginError if login fails """ for provider in self.password_providers: is_valid = yield provider.check_password(user_id, password) if is_valid: defer.returnValue(user_id) canonical_user_id = yield self._check_local_password(user_id, password) if canonical_user_id: defer.returnValue(canonical_user_id) # unknown username or invalid password. We raise a 403 here, but note # that if we're doing user-interactive login, it turns all LoginErrors # into a 401 anyway. raise LoginError( 403, "Invalid password", errcode=Codes.FORBIDDEN ) @defer.inlineCallbacks def _check_local_password(self, user_id, password): """Authenticate a user against the local password database. user_id is checked case insensitively, but will return None if there are multiple inexact matches. 
Args: user_id (str): complete @user:id Returns: (str) the canonical_user_id, or None if unknown user / bad password """ lookupres = yield self._find_user_id_and_pwd_hash(user_id) if not lookupres: defer.returnValue(None) (user_id, password_hash) = lookupres result = self.validate_hash(password, password_hash) if not result: logger.warn("Failed password login for user %s", user_id) defer.returnValue(None) defer.returnValue(user_id) @defer.inlineCallbacks def issue_access_token(self, user_id, device_id=None): access_token = self.macaroon_gen.generate_access_token(user_id) yield self.store.add_access_token_to_user(user_id, access_token, device_id) defer.returnValue(access_token) def validate_short_term_login_token_and_get_user_id(self, login_token): auth_api = self.hs.get_auth() try: macaroon = pymacaroons.Macaroon.deserialize(login_token) user_id = auth_api.get_user_id_from_macaroon(macaroon) auth_api.validate_macaroon(macaroon, "login", True, user_id) return user_id except Exception: raise AuthError(403, "Invalid token", errcode=Codes.FORBIDDEN) @defer.inlineCallbacks def set_password(self, user_id, newpassword, requester=None): password_hash = self.hash(newpassword) except_access_token_id = requester.access_token_id if requester else None try: yield self.store.user_set_password_hash(user_id, password_hash) except StoreError as e: if e.code == 404: raise SynapseError(404, "Unknown user", Codes.NOT_FOUND) raise e yield self.store.user_delete_access_tokens( user_id, except_access_token_id ) yield self.hs.get_pusherpool().remove_pushers_by_user( user_id, except_access_token_id ) @defer.inlineCallbacks def add_threepid(self, user_id, medium, address, validated_at): # 'Canonicalise' email addresses down to lower case. # We've now moving towards the Home Server being the entity that # is responsible for validating threepids used for resetting passwords # on accounts, so in future Synapse will gain knowledge of specific # types (mediums) of threepid. For now, we still use the existing # infrastructure, but this is the start of synapse gaining knowledge # of specific types of threepid (and fixes the fact that checking # for the presence of an email address during password reset was # case sensitive). if medium == 'email': address = address.lower() yield self.store.user_add_threepid( user_id, medium, address, validated_at, self.hs.get_clock().time_msec() ) @defer.inlineCallbacks def delete_threepid(self, user_id, medium, address): # 'Canonicalise' email addresses as per above if medium == 'email': address = address.lower() ret = yield self.store.user_delete_threepid( user_id, medium, address, ) defer.returnValue(ret) def _save_session(self, session): # TODO: Persistent storage logger.debug("Saving session %s", session) session["last_used"] = self.hs.get_clock().time_msec() self.sessions[session["id"]] = session def hash(self, password): """Computes a secure hash of password. Args: password (str): Password to hash. Returns: Hashed password (str). """ return bcrypt.hashpw(password.encode('utf8') + self.hs.config.password_pepper, bcrypt.gensalt(self.bcrypt_rounds)) def validate_hash(self, password, stored_hash): """Validates that self.hash(password) == stored_hash. Args: password (str): Password to hash. stored_hash (str): Expected hash value. Returns: Whether self.hash(password) == stored_hash (bool). 
""" if stored_hash: return bcrypt.hashpw(password.encode('utf8') + self.hs.config.password_pepper, stored_hash.encode('utf8')) == stored_hash else: return False class MacaroonGeneartor(object): def __init__(self, hs): self.clock = hs.get_clock() self.server_name = hs.config.server_name self.macaroon_secret_key = hs.config.macaroon_secret_key def generate_access_token(self, user_id, extra_caveats=None): extra_caveats = extra_caveats or [] macaroon = self._generate_base_macaroon(user_id) macaroon.add_first_party_caveat("type = access") # Include a nonce, to make sure that each login gets a different # access token. macaroon.add_first_party_caveat("nonce = %s" % ( stringutils.random_string_with_symbols(16), )) for caveat in extra_caveats: macaroon.add_first_party_caveat(caveat) return macaroon.serialize() def generate_short_term_login_token(self, user_id, duration_in_ms=(2 * 60 * 1000)): macaroon = self._generate_base_macaroon(user_id) macaroon.add_first_party_caveat("type = login") now = self.clock.time_msec() expiry = now + duration_in_ms macaroon.add_first_party_caveat("time < %d" % (expiry,)) return macaroon.serialize() def generate_delete_pusher_token(self, user_id): macaroon = self._generate_base_macaroon(user_id) macaroon.add_first_party_caveat("type = delete_pusher") return macaroon.serialize() def _generate_base_macaroon(self, user_id): macaroon = pymacaroons.Macaroon( location=self.server_name, identifier="key", key=self.macaroon_secret_key) macaroon.add_first_party_caveat("gen = 1") macaroon.add_first_party_caveat("user_id = %s" % (user_id,)) return macaroon class _AccountHandler(object): """A proxy object that gets passed to password auth providers so they can register new users etc if necessary. """ def __init__(self, hs, check_user_exists): self.hs = hs self._check_user_exists = check_user_exists def check_user_exists(self, user_id): """Check if user exissts. Returns: Deferred(bool) """ return self._check_user_exists(user_id) def register(self, localpart): """Registers a new user with given localpart Returns: Deferred: a 2-tuple of (user_id, access_token) """ reg = self.hs.get_handlers().registration_handler return reg.register(localpart=localpart) synapse-0.24.0/synapse/handlers/device.py000066400000000000000000000503521317335640100203620ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
from synapse.api import errors from synapse.api.constants import EventTypes from synapse.util import stringutils from synapse.util.async import Linearizer from synapse.util.caches.expiringcache import ExpiringCache from synapse.util.retryutils import NotRetryingDestination from synapse.util.metrics import measure_func from synapse.types import get_domain_from_id, RoomStreamToken from twisted.internet import defer from ._base import BaseHandler import logging logger = logging.getLogger(__name__) class DeviceHandler(BaseHandler): def __init__(self, hs): super(DeviceHandler, self).__init__(hs) self.hs = hs self.state = hs.get_state_handler() self.federation_sender = hs.get_federation_sender() self.federation = hs.get_replication_layer() self._edu_updater = DeviceListEduUpdater(hs, self) self.federation.register_edu_handler( "m.device_list_update", self._edu_updater.incoming_device_list_update, ) self.federation.register_query_handler( "user_devices", self.on_federation_query_user_devices, ) hs.get_distributor().observe("user_left_room", self.user_left_room) @defer.inlineCallbacks def check_device_registered(self, user_id, device_id, initial_device_display_name=None): """ If the given device has not been registered, register it with the supplied display name. If no device_id is supplied, we make one up. Args: user_id (str): @user:id device_id (str | None): device id supplied by client initial_device_display_name (str | None): device display name from client Returns: str: device id (generated if none was supplied) """ if device_id is not None: new_device = yield self.store.store_device( user_id=user_id, device_id=device_id, initial_device_display_name=initial_device_display_name, ) if new_device: yield self.notify_device_update(user_id, [device_id]) defer.returnValue(device_id) # if the device id is not specified, we'll autogen one, but loop a few # times in case of a clash. 
attempts = 0 while attempts < 5: device_id = stringutils.random_string(10).upper() new_device = yield self.store.store_device( user_id=user_id, device_id=device_id, initial_device_display_name=initial_device_display_name, ) if new_device: yield self.notify_device_update(user_id, [device_id]) defer.returnValue(device_id) attempts += 1 raise errors.StoreError(500, "Couldn't generate a device ID.") @defer.inlineCallbacks def get_devices_by_user(self, user_id): """ Retrieve the given user's devices Args: user_id (str): Returns: defer.Deferred: list[dict[str, X]]: info on each device """ device_map = yield self.store.get_devices_by_user(user_id) ips = yield self.store.get_last_client_ip_by_device( user_id, device_id=None ) devices = device_map.values() for device in devices: _update_device_from_client_ips(device, ips) defer.returnValue(devices) @defer.inlineCallbacks def get_device(self, user_id, device_id): """ Retrieve the given device Args: user_id (str): device_id (str): Returns: defer.Deferred: dict[str, X]: info on the device Raises: errors.NotFoundError: if the device was not found """ try: device = yield self.store.get_device(user_id, device_id) except errors.StoreError: raise errors.NotFoundError ips = yield self.store.get_last_client_ip_by_device( user_id, device_id, ) _update_device_from_client_ips(device, ips) defer.returnValue(device) @defer.inlineCallbacks def delete_device(self, user_id, device_id): """ Delete the given device Args: user_id (str): device_id (str): Returns: defer.Deferred: """ try: yield self.store.delete_device(user_id, device_id) except errors.StoreError, e: if e.code == 404: # no match pass else: raise yield self.store.user_delete_access_tokens( user_id, device_id=device_id, delete_refresh_tokens=True, ) yield self.store.delete_e2e_keys_by_device( user_id=user_id, device_id=device_id ) yield self.notify_device_update(user_id, [device_id]) @defer.inlineCallbacks def delete_devices(self, user_id, device_ids): """ Delete several devices Args: user_id (str): device_ids (str): The list of device IDs to delete Returns: defer.Deferred: """ try: yield self.store.delete_devices(user_id, device_ids) except errors.StoreError, e: if e.code == 404: # no match pass else: raise # Delete access tokens and e2e keys for each device. Not optimised as it is not # considered as part of a critical path. for device_id in device_ids: yield self.store.user_delete_access_tokens( user_id, device_id=device_id, delete_refresh_tokens=True, ) yield self.store.delete_e2e_keys_by_device( user_id=user_id, device_id=device_id ) yield self.notify_device_update(user_id, device_ids) @defer.inlineCallbacks def update_device(self, user_id, device_id, content): """ Update the given device Args: user_id (str): device_id (str): content (dict): body of update request Returns: defer.Deferred: """ try: yield self.store.update_device( user_id, device_id, new_display_name=content.get("display_name") ) yield self.notify_device_update(user_id, [device_id]) except errors.StoreError, e: if e.code == 404: raise errors.NotFoundError() else: raise @measure_func("notify_device_update") @defer.inlineCallbacks def notify_device_update(self, user_id, device_ids): """Notify that a user's device(s) has changed. Pokes the notifier, and remote servers if the user is local. 
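# Editor's note: a standalone illustration of the "generate an ID, retry on clash"
# pattern used by check_device_registered above. _store_device here is a stand-in
# for the real datastore call and just records IDs in a set; the real handler also
# notifies other devices about the newly registered device.
import random
import string

_existing_device_ids = set()

def _store_device(device_id):
    """Returns True if the device was newly created, False on a clash."""
    if device_id in _existing_device_ids:
        return False
    _existing_device_ids.add(device_id)
    return True

def register_device(device_id=None, max_attempts=5):
    if device_id is not None:
        _store_device(device_id)
        return device_id
    for _ in range(max_attempts):
        candidate = "".join(random.choice(string.ascii_uppercase) for _ in range(10))
        if _store_device(candidate):
            return candidate
    raise RuntimeError("Couldn't generate a device ID.")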
""" users_who_share_room = yield self.store.get_users_who_share_room_with_user( user_id ) hosts = set() if self.hs.is_mine_id(user_id): hosts.update(get_domain_from_id(u) for u in users_who_share_room) hosts.discard(self.server_name) position = yield self.store.add_device_change_to_streams( user_id, device_ids, list(hosts) ) room_ids = yield self.store.get_rooms_for_user(user_id) yield self.notifier.on_new_event( "device_list_key", position, rooms=room_ids, ) if hosts: logger.info("Sending device list update notif to: %r", hosts) for host in hosts: self.federation_sender.send_device_messages(host) @measure_func("device.get_user_ids_changed") @defer.inlineCallbacks def get_user_ids_changed(self, user_id, from_token): """Get list of users that have had the devices updated, or have newly joined a room, that `user_id` may be interested in. Args: user_id (str) from_token (StreamToken) """ now_token = yield self.hs.get_event_sources().get_current_token() room_ids = yield self.store.get_rooms_for_user(user_id) # First we check if any devices have changed changed = yield self.store.get_user_whose_devices_changed( from_token.device_list_key ) # Then work out if any users have since joined rooms_changed = self.store.get_rooms_that_changed(room_ids, from_token.room_key) member_events = yield self.store.get_membership_changes_for_user( user_id, from_token.room_key, now_token.room_key ) rooms_changed.update(event.room_id for event in member_events) stream_ordering = RoomStreamToken.parse_stream_token( from_token.room_key ).stream possibly_changed = set(changed) possibly_left = set() for room_id in rooms_changed: current_state_ids = yield self.store.get_current_state_ids(room_id) # The user may have left the room # TODO: Check if they actually did or if we were just invited. if room_id not in room_ids: for key, event_id in current_state_ids.iteritems(): etype, state_key = key if etype != EventTypes.Member: continue possibly_left.add(state_key) continue # Fetch the current state at the time. try: event_ids = yield self.store.get_forward_extremeties_for_room( room_id, stream_ordering=stream_ordering ) except errors.StoreError: # we have purged the stream_ordering index since the stream # ordering: treat it the same as a new room event_ids = [] # special-case for an empty prev state: include all members # in the changed list if not event_ids: for key, event_id in current_state_ids.iteritems(): etype, state_key = key if etype != EventTypes.Member: continue possibly_changed.add(state_key) continue current_member_id = current_state_ids.get((EventTypes.Member, user_id)) if not current_member_id: continue # mapping from event_id -> state_dict prev_state_ids = yield self.store.get_state_ids_for_events(event_ids) # Check if we've joined the room? If so we just blindly add all the users to # the "possibly changed" users. for state_dict in prev_state_ids.itervalues(): member_event = state_dict.get((EventTypes.Member, user_id), None) if not member_event or member_event != current_member_id: for key, event_id in current_state_ids.iteritems(): etype, state_key = key if etype != EventTypes.Member: continue possibly_changed.add(state_key) break # If there has been any change in membership, include them in the # possibly changed list. We'll check if they are joined below, # and we're not toooo worried about spuriously adding users. 
for key, event_id in current_state_ids.iteritems(): etype, state_key = key if etype != EventTypes.Member: continue # check if this member has changed since any of the extremities # at the stream_ordering, and add them to the list if so. for state_dict in prev_state_ids.itervalues(): prev_event_id = state_dict.get(key, None) if not prev_event_id or prev_event_id != event_id: if state_key != user_id: possibly_changed.add(state_key) break if possibly_changed or possibly_left: users_who_share_room = yield self.store.get_users_who_share_room_with_user( user_id ) # Take the intersection of the users whose devices may have changed # and those that actually still share a room with the user possibly_joined = possibly_changed & users_who_share_room possibly_left = (possibly_changed | possibly_left) - users_who_share_room else: possibly_joined = [] possibly_left = [] defer.returnValue({ "changed": list(possibly_joined), "left": list(possibly_left), }) @defer.inlineCallbacks def on_federation_query_user_devices(self, user_id): stream_id, devices = yield self.store.get_devices_with_keys_by_user(user_id) defer.returnValue({ "user_id": user_id, "stream_id": stream_id, "devices": devices, }) @defer.inlineCallbacks def user_left_room(self, user, room_id): user_id = user.to_string() room_ids = yield self.store.get_rooms_for_user(user_id) if not room_ids: # We no longer share rooms with this user, so we'll no longer # receive device updates. Mark this in DB. yield self.store.mark_remote_user_device_list_as_unsubscribed(user_id) def _update_device_from_client_ips(device, client_ips): ip = client_ips.get((device["user_id"], device["device_id"]), {}) device.update({ "last_seen_ts": ip.get("last_seen"), "last_seen_ip": ip.get("ip"), }) class DeviceListEduUpdater(object): "Handles incoming device list updates from federation and updates the DB" def __init__(self, hs, device_handler): self.store = hs.get_datastore() self.federation = hs.get_replication_layer() self.clock = hs.get_clock() self.device_handler = device_handler self._remote_edu_linearizer = Linearizer(name="remote_device_list") # user_id -> list of updates waiting to be handled. self._pending_updates = {} # Recently seen stream ids. We don't bother keeping these in the DB, # but they're useful to have them about to reduce the number of spurious # resyncs. self._seen_updates = ExpiringCache( cache_name="device_update_edu", clock=self.clock, max_len=10000, expiry_ms=30 * 60 * 1000, iterable=True, ) @defer.inlineCallbacks def incoming_device_list_update(self, origin, edu_content): """Called on incoming device list update from federation. Responsible for parsing the EDU and adding to pending updates list. """ user_id = edu_content.pop("user_id") device_id = edu_content.pop("device_id") stream_id = str(edu_content.pop("stream_id")) # They may come as ints prev_ids = edu_content.pop("prev_id", []) prev_ids = [str(p) for p in prev_ids] # They may come as ints if get_domain_from_id(user_id) != origin: # TODO: Raise? logger.warning("Got device list update edu for %r from %r", user_id, origin) return room_ids = yield self.store.get_rooms_for_user(user_id) if not room_ids: # We don't share any rooms with this user. Ignore update, as we # probably won't get any further updates. 
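# Editor's note: a pure-function sketch of the final step of get_user_ids_changed
# above: of all the users whose device lists may have changed, only those we still
# share a room with are reported as "changed"; everyone else, plus users in rooms
# we appear to have left, is reported as "left".
def split_changed_and_left(possibly_changed, possibly_left, users_who_share_room):
    """users_who_share_room: set of user ids that still share a room with us."""
    possibly_changed = set(possibly_changed)
    possibly_left = set(possibly_left)
    if not (possibly_changed or possibly_left):
        return [], []
    changed = possibly_changed & users_who_share_room
    left = (possibly_changed | possibly_left) - users_who_share_room
    return list(changed), list(left)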
return self._pending_updates.setdefault(user_id, []).append( (device_id, stream_id, prev_ids, edu_content) ) yield self._handle_device_updates(user_id) @measure_func("_incoming_device_list_update") @defer.inlineCallbacks def _handle_device_updates(self, user_id): "Actually handle pending updates." with (yield self._remote_edu_linearizer.queue(user_id)): pending_updates = self._pending_updates.pop(user_id, []) if not pending_updates: # This can happen since we batch updates return # Given a list of updates we check if we need to resync. This # happens if we've missed updates. resync = yield self._need_to_do_resync(user_id, pending_updates) if resync: # Fetch all devices for the user. origin = get_domain_from_id(user_id) try: result = yield self.federation.query_user_devices(origin, user_id) except NotRetryingDestination: # TODO: Remember that we are now out of sync and try again # later logger.warn( "Failed to handle device list update for %s," " we're not retrying the remote", user_id, ) # We abort on exceptions rather than accepting the update # as otherwise synapse will 'forget' that its device list # is out of date. If we bail then we will retry the resync # next time we get a device list update for this user_id. # This makes it more likely that the device lists will # eventually become consistent. return except Exception: # TODO: Remember that we are now out of sync and try again # later logger.exception( "Failed to handle device list update for %s", user_id ) return stream_id = result["stream_id"] devices = result["devices"] yield self.store.update_remote_device_list_cache( user_id, devices, stream_id, ) device_ids = [device["device_id"] for device in devices] yield self.device_handler.notify_device_update(user_id, device_ids) else: # Simply update the single device, since we know that is the only # change (becuase of the single prev_id matching the current cache) for device_id, stream_id, prev_ids, content in pending_updates: yield self.store.update_remote_device_list_cache_entry( user_id, device_id, content, stream_id, ) yield self.device_handler.notify_device_update( user_id, [device_id for device_id, _, _, _ in pending_updates] ) self._seen_updates.setdefault(user_id, set()).update( stream_id for _, stream_id, _, _ in pending_updates ) @defer.inlineCallbacks def _need_to_do_resync(self, user_id, updates): """Given a list of updates for a user figure out if we need to do a full resync, or whether we have enough data that we can just apply the delta. """ seen_updates = self._seen_updates.get(user_id, set()) extremity = yield self.store.get_device_list_last_stream_id_for_remote( user_id ) stream_id_in_updates = set() # stream_ids in updates list for _, stream_id, prev_ids, _ in updates: if not prev_ids: # We always do a resync if there are no previous IDs defer.returnValue(True) for prev_id in prev_ids: if prev_id == extremity: continue elif prev_id in seen_updates: continue elif prev_id in stream_id_in_updates: continue else: defer.returnValue(True) stream_id_in_updates.add(stream_id) defer.returnValue(False) synapse-0.24.0/synapse/handlers/devicemessage.py000066400000000000000000000077771317335640100217440ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
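# Editor's note: a pure-function sketch of the resync decision implemented by
# _need_to_do_resync above. An incoming batch of device-list updates can be applied
# as a delta only if every prev_id it references is already known, either as the
# stored extremity, as a recently seen stream id, or as an earlier update in the
# same batch; otherwise a full resync of the user's devices is required.
def need_to_do_resync(extremity, seen_updates, updates):
    """updates: list of (stream_id, prev_ids) tuples, oldest first."""
    stream_ids_in_updates = set()
    for stream_id, prev_ids in updates:
        if not prev_ids:
            return True  # no previous IDs at all: always resync
        for prev_id in prev_ids:
            known = (
                prev_id == extremity
                or prev_id in seen_updates
                or prev_id in stream_ids_in_updates
            )
            if not known:
                return True
        stream_ids_in_updates.add(stream_id)
    return False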
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import logging from twisted.internet import defer from synapse.types import get_domain_from_id from synapse.util.stringutils import random_string logger = logging.getLogger(__name__) class DeviceMessageHandler(object): def __init__(self, hs): """ Args: hs (synapse.server.HomeServer): server """ self.store = hs.get_datastore() self.notifier = hs.get_notifier() self.is_mine_id = hs.is_mine_id self.federation = hs.get_federation_sender() hs.get_replication_layer().register_edu_handler( "m.direct_to_device", self.on_direct_to_device_edu ) @defer.inlineCallbacks def on_direct_to_device_edu(self, origin, content): local_messages = {} sender_user_id = content["sender"] if origin != get_domain_from_id(sender_user_id): logger.warn( "Dropping device message from %r with spoofed sender %r", origin, sender_user_id ) message_type = content["type"] message_id = content["message_id"] for user_id, by_device in content["messages"].items(): messages_by_device = { device_id: { "content": message_content, "type": message_type, "sender": sender_user_id, } for device_id, message_content in by_device.items() } if messages_by_device: local_messages[user_id] = messages_by_device stream_id = yield self.store.add_messages_from_remote_to_device_inbox( origin, message_id, local_messages ) self.notifier.on_new_event( "to_device_key", stream_id, users=local_messages.keys() ) @defer.inlineCallbacks def send_device_message(self, sender_user_id, message_type, messages): local_messages = {} remote_messages = {} for user_id, by_device in messages.items(): if self.is_mine_id(user_id): messages_by_device = { device_id: { "content": message_content, "type": message_type, "sender": sender_user_id, } for device_id, message_content in by_device.items() } if messages_by_device: local_messages[user_id] = messages_by_device else: destination = get_domain_from_id(user_id) remote_messages.setdefault(destination, {})[user_id] = by_device message_id = random_string(16) remote_edu_contents = {} for destination, messages in remote_messages.items(): remote_edu_contents[destination] = { "messages": messages, "sender": sender_user_id, "type": message_type, "message_id": message_id, } stream_id = yield self.store.add_messages_to_device_inbox( local_messages, remote_edu_contents ) self.notifier.on_new_event( "to_device_key", stream_id, users=local_messages.keys() ) for destination in remote_messages.keys(): # Enqueue a new federation transaction to send the new # device messages to each remote destination. self.federation.send_device_messages(destination) synapse-0.24.0/synapse/handlers/directory.py000066400000000000000000000316301317335640100211250ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
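# Editor's note: a standalone sketch of the local/remote split performed by
# send_device_message above. is_local stands in for hs.is_mine_id and get_domain
# for synapse.types.get_domain_from_id; the real handler additionally wraps each
# message with its type and sender before storing or federating it.
def split_messages_by_destination(messages, is_local, get_domain):
    """messages: {user_id: {device_id: content}}.
    Returns (local_messages, remote_messages_by_destination)."""
    local_messages = {}
    remote_messages = {}
    for user_id, by_device in messages.items():
        if is_local(user_id):
            local_messages[user_id] = by_device
        else:
            destination = get_domain(user_id)
            remote_messages.setdefault(destination, {})[user_id] = by_device
    return local_messages, remote_messages

# e.g. split_messages_by_destination(
#     {"@a:here.example": {"D1": {}}, "@b:other.example": {"D2": {}}},
#     is_local=lambda u: u.endswith(":here.example"),
#     get_domain=lambda u: u.split(":", 1)[1],
# ) -> ({"@a:here.example": {"D1": {}}},
#       {"other.example": {"@b:other.example": {"D2": {}}}})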
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from twisted.internet import defer from ._base import BaseHandler from synapse.api.errors import SynapseError, Codes, CodeMessageException, AuthError from synapse.api.constants import EventTypes from synapse.types import RoomAlias, UserID, get_domain_from_id import logging import string logger = logging.getLogger(__name__) class DirectoryHandler(BaseHandler): def __init__(self, hs): super(DirectoryHandler, self).__init__(hs) self.state = hs.get_state_handler() self.appservice_handler = hs.get_application_service_handler() self.federation = hs.get_replication_layer() self.federation.register_query_handler( "directory", self.on_directory_query ) self.spam_checker = hs.get_spam_checker() @defer.inlineCallbacks def _create_association(self, room_alias, room_id, servers=None, creator=None): # general association creation for both human users and app services for wchar in string.whitespace: if wchar in room_alias.localpart: raise SynapseError(400, "Invalid characters in room alias") if not self.hs.is_mine(room_alias): raise SynapseError(400, "Room alias must be local") # TODO(erikj): Change this. # TODO(erikj): Add transactions. # TODO(erikj): Check if there is a current association. if not servers: users = yield self.state.get_current_user_in_room(room_id) servers = set(get_domain_from_id(u) for u in users) if not servers: raise SynapseError(400, "Failed to get server list") yield self.store.create_room_alias_association( room_alias, room_id, servers, creator=creator, ) @defer.inlineCallbacks def create_association(self, user_id, room_alias, room_id, servers=None): # association creation for human users # TODO(erikj): Do user auth. 
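# Editor's note: a standalone sketch of the alias sanity checks at the start of
# _create_association above, using ValueError in place of SynapseError. The
# localpart/domain split and the my_server_name argument are simplifications of
# RoomAlias and hs.is_mine().
import string

def validate_room_alias(alias_localpart, alias_domain, my_server_name):
    for wchar in string.whitespace:
        if wchar in alias_localpart:
            raise ValueError("Invalid characters in room alias")
    if alias_domain != my_server_name:
        raise ValueError("Room alias must be local")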
if not self.spam_checker.user_may_create_room_alias(user_id, room_alias): raise SynapseError( 403, "This user is not permitted to create this alias", ) can_create = yield self.can_modify_alias( room_alias, user_id=user_id ) if not can_create: raise SynapseError( 400, "This alias is reserved by an application service.", errcode=Codes.EXCLUSIVE ) yield self._create_association(room_alias, room_id, servers, creator=user_id) @defer.inlineCallbacks def create_appservice_association(self, service, room_alias, room_id, servers=None): if not service.is_interested_in_alias(room_alias.to_string()): raise SynapseError( 400, "This application service has not reserved" " this kind of alias.", errcode=Codes.EXCLUSIVE ) # association creation for app services yield self._create_association(room_alias, room_id, servers) @defer.inlineCallbacks def delete_association(self, requester, user_id, room_alias): # association deletion for human users can_delete = yield self._user_can_delete_alias(room_alias, user_id) if not can_delete: raise AuthError( 403, "You don't have permission to delete the alias.", ) can_delete = yield self.can_modify_alias( room_alias, user_id=user_id ) if not can_delete: raise SynapseError( 400, "This alias is reserved by an application service.", errcode=Codes.EXCLUSIVE ) room_id = yield self._delete_association(room_alias) try: yield self.send_room_alias_update_event( requester, requester.user.to_string(), room_id ) yield self._update_canonical_alias( requester, requester.user.to_string(), room_id, room_alias, ) except AuthError as e: logger.info("Failed to update alias events: %s", e) defer.returnValue(room_id) @defer.inlineCallbacks def delete_appservice_association(self, service, room_alias): if not service.is_interested_in_alias(room_alias.to_string()): raise SynapseError( 400, "This application service has not reserved this kind of alias", errcode=Codes.EXCLUSIVE ) yield self._delete_association(room_alias) @defer.inlineCallbacks def _delete_association(self, room_alias): if not self.hs.is_mine(room_alias): raise SynapseError(400, "Room alias must be local") room_id = yield self.store.delete_room_alias(room_alias) defer.returnValue(room_id) @defer.inlineCallbacks def get_association(self, room_alias): room_id = None if self.hs.is_mine(room_alias): result = yield self.get_association_from_room_alias( room_alias ) if result: room_id = result.room_id servers = result.servers else: try: result = yield self.federation.make_query( destination=room_alias.domain, query_type="directory", args={ "room_alias": room_alias.to_string(), }, retry_on_dns_fail=False, ignore_backoff=True, ) except CodeMessageException as e: logging.warn("Error retrieving alias") if e.code == 404: result = None else: raise if result and "room_id" in result and "servers" in result: room_id = result["room_id"] servers = result["servers"] if not room_id: raise SynapseError( 404, "Room alias %s not found" % (room_alias.to_string(),), Codes.NOT_FOUND ) users = yield self.state.get_current_user_in_room(room_id) extra_servers = set(get_domain_from_id(u) for u in users) servers = set(extra_servers) | set(servers) # If this server is in the list of servers, return it first. 
if self.server_name in servers: servers = ( [self.server_name] + [s for s in servers if s != self.server_name] ) else: servers = list(servers) defer.returnValue({ "room_id": room_id, "servers": servers, }) return @defer.inlineCallbacks def on_directory_query(self, args): room_alias = RoomAlias.from_string(args["room_alias"]) if not self.hs.is_mine(room_alias): raise SynapseError( 400, "Room Alias is not hosted on this Home Server" ) result = yield self.get_association_from_room_alias( room_alias ) if result is not None: defer.returnValue({ "room_id": result.room_id, "servers": result.servers, }) else: raise SynapseError( 404, "Room alias %r not found" % (room_alias.to_string(),), Codes.NOT_FOUND ) @defer.inlineCallbacks def send_room_alias_update_event(self, requester, user_id, room_id): aliases = yield self.store.get_aliases_for_room(room_id) msg_handler = self.hs.get_handlers().message_handler yield msg_handler.create_and_send_nonmember_event( requester, { "type": EventTypes.Aliases, "state_key": self.hs.hostname, "room_id": room_id, "sender": user_id, "content": {"aliases": aliases}, }, ratelimit=False ) @defer.inlineCallbacks def _update_canonical_alias(self, requester, user_id, room_id, room_alias): alias_event = yield self.state.get_current_state( room_id, EventTypes.CanonicalAlias, "" ) alias_str = room_alias.to_string() if not alias_event or alias_event.content.get("alias", "") != alias_str: return msg_handler = self.hs.get_handlers().message_handler yield msg_handler.create_and_send_nonmember_event( requester, { "type": EventTypes.CanonicalAlias, "state_key": "", "room_id": room_id, "sender": user_id, "content": {}, }, ratelimit=False ) @defer.inlineCallbacks def get_association_from_room_alias(self, room_alias): result = yield self.store.get_association_from_room_alias( room_alias ) if not result: # Query AS to see if it exists as_handler = self.appservice_handler result = yield as_handler.query_room_alias_exists(room_alias) defer.returnValue(result) def can_modify_alias(self, alias, user_id=None): # Any application service "interested" in an alias they are regexing on # can modify the alias. # Users can only modify the alias if ALL the interested services have # non-exclusive locks on the alias (or there are no interested services) services = self.store.get_app_services() interested_services = [ s for s in services if s.is_interested_in_alias(alias.to_string()) ] for service in interested_services: if user_id == service.sender: # this user IS the app service so they can do whatever they like return defer.succeed(True) elif service.is_exclusive_alias(alias.to_string()): # another service has an exclusive lock on this alias. return defer.succeed(False) # either no interested services, or no service with an exclusive lock return defer.succeed(True) @defer.inlineCallbacks def _user_can_delete_alias(self, alias, user_id): creator = yield self.store.get_room_alias_creator(alias.to_string()) if creator and creator == user_id: defer.returnValue(True) is_admin = yield self.auth.is_server_admin(UserID.from_string(user_id)) defer.returnValue(is_admin) @defer.inlineCallbacks def edit_published_room_list(self, requester, room_id, visibility): """Edit the entry of the room in the published room list. 
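# Editor's note: a pure-function sketch of the exclusivity rule in can_modify_alias
# above, returning plain booleans rather than Deferreds. services is any iterable
# of objects exposing is_interested_in_alias(), is_exclusive_alias() and a sender
# attribute, mirroring the application service registrations the real handler
# consults.
def user_can_modify_alias(alias, user_id, services):
    interested = [s for s in services if s.is_interested_in_alias(alias)]
    for service in interested:
        if user_id == service.sender:
            return True  # the app service itself may always modify its own aliases
        if service.is_exclusive_alias(alias):
            return False  # another service holds an exclusive claim on this alias
    return True  # no interested services, or none with an exclusive lock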
requester room_id (str) visibility (str): "public" or "private" """ if not self.spam_checker.user_may_publish_room( requester.user.to_string(), room_id ): raise AuthError( 403, "This user is not permitted to publish rooms to the room list" ) if requester.is_guest: raise AuthError(403, "Guests cannot edit the published room list") if visibility not in ["public", "private"]: raise SynapseError(400, "Invalid visibility setting") room = yield self.store.get_room(room_id) if room is None: raise SynapseError(400, "Unknown room") yield self.auth.check_can_change_room_list(room_id, requester.user) yield self.store.set_room_is_public(room_id, visibility == "public") @defer.inlineCallbacks def edit_published_appservice_room_list(self, appservice_id, network_id, room_id, visibility): """Add or remove a room from the appservice/network specific public room list. Args: appservice_id (str): ID of the appservice that owns the list network_id (str): The ID of the network the list is associated with room_id (str) visibility (str): either "public" or "private" """ if visibility not in ["public", "private"]: raise SynapseError(400, "Invalid visibility setting") yield self.store.set_room_is_public_appservice( room_id, appservice_id, network_id, visibility == "public" ) synapse-0.24.0/synapse/handlers/e2e_keys.py000066400000000000000000000336471317335640100206410ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import ujson as json import logging from canonicaljson import encode_canonical_json from twisted.internet import defer from synapse.api.errors import SynapseError, CodeMessageException from synapse.types import get_domain_from_id from synapse.util.logcontext import preserve_fn, make_deferred_yieldable from synapse.util.retryutils import NotRetryingDestination logger = logging.getLogger(__name__) class E2eKeysHandler(object): def __init__(self, hs): self.store = hs.get_datastore() self.federation = hs.get_replication_layer() self.device_handler = hs.get_device_handler() self.is_mine_id = hs.is_mine_id self.clock = hs.get_clock() # doesn't really work as part of the generic query API, because the # query request requires an object POST, but we abuse the # "query handler" interface. self.federation.register_query_handler( "client_keys", self.on_federation_query_client_keys ) @defer.inlineCallbacks def query_devices(self, query_body, timeout): """ Handle a device key query from a client { "device_keys": { "": [""] } } -> { "device_keys": { "": { "": { ... } } } } """ device_keys_query = query_body.get("device_keys", {}) # separate users by domain. # make a map from domain to user_id to device_ids local_query = {} remote_queries = {} for user_id, device_ids in device_keys_query.items(): if self.is_mine_id(user_id): local_query[user_id] = device_ids else: remote_queries[user_id] = device_ids # Firt get local devices. 
failures = {} results = {} if local_query: local_result = yield self.query_local_devices(local_query) for user_id, keys in local_result.items(): if user_id in local_query: results[user_id] = keys # Now attempt to get any remote devices from our local cache. remote_queries_not_in_cache = {} if remote_queries: query_list = [] for user_id, device_ids in remote_queries.iteritems(): if device_ids: query_list.extend((user_id, device_id) for device_id in device_ids) else: query_list.append((user_id, None)) user_ids_not_in_cache, remote_results = ( yield self.store.get_user_devices_from_cache( query_list ) ) for user_id, devices in remote_results.iteritems(): user_devices = results.setdefault(user_id, {}) for device_id, device in devices.iteritems(): keys = device.get("keys", None) device_display_name = device.get("device_display_name", None) if keys: result = dict(keys) unsigned = result.setdefault("unsigned", {}) if device_display_name: unsigned["device_display_name"] = device_display_name user_devices[device_id] = result for user_id in user_ids_not_in_cache: domain = get_domain_from_id(user_id) r = remote_queries_not_in_cache.setdefault(domain, {}) r[user_id] = remote_queries[user_id] # Now fetch any devices that we don't have in our cache @defer.inlineCallbacks def do_remote_query(destination): destination_query = remote_queries_not_in_cache[destination] try: remote_result = yield self.federation.query_client_keys( destination, {"device_keys": destination_query}, timeout=timeout ) for user_id, keys in remote_result["device_keys"].items(): if user_id in destination_query: results[user_id] = keys except CodeMessageException as e: failures[destination] = { "status": e.code, "message": e.message } except NotRetryingDestination as e: failures[destination] = { "status": 503, "message": "Not ready for retry", } except Exception as e: # include ConnectionRefused and other errors failures[destination] = { "status": 503, "message": e.message } yield make_deferred_yieldable(defer.gatherResults([ preserve_fn(do_remote_query)(destination) for destination in remote_queries_not_in_cache ])) defer.returnValue({ "device_keys": results, "failures": failures, }) @defer.inlineCallbacks def query_local_devices(self, query): """Get E2E device keys for local users Args: query (dict[string, list[string]|None): map from user_id to a list of devices to query (None for all devices) Returns: defer.Deferred: (resolves to dict[string, dict[string, dict]]): map from user_id -> device_id -> device details """ local_query = [] result_dict = {} for user_id, device_ids in query.items(): if not self.is_mine_id(user_id): logger.warning("Request for keys for non-local user %s", user_id) raise SynapseError(400, "Not a user here") if not device_ids: local_query.append((user_id, None)) else: for device_id in device_ids: local_query.append((user_id, device_id)) # make sure that each queried user appears in the result dict result_dict[user_id] = {} results = yield self.store.get_e2e_device_keys(local_query) # Build the result structure, un-jsonify the results, and add the # "unsigned" section for user_id, device_keys in results.items(): for device_id, device_info in device_keys.items(): r = dict(device_info["keys"]) r["unsigned"] = {} display_name = device_info["device_display_name"] if display_name is not None: r["unsigned"]["device_display_name"] = display_name result_dict[user_id][device_id] = r defer.returnValue(result_dict) @defer.inlineCallbacks def on_federation_query_client_keys(self, query_body): """ Handle a device key query 
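# Editor's note: a condensed, standalone sketch of the first step of query_devices
# above: partitioning a client's {user_id: [device_id, ...]} query into local users
# and per-remote-server queries. is_local stands in for hs.is_mine_id and get_domain
# for get_domain_from_id; the real code also consults a local cache of remote
# devices before grouping the remaining users by destination.
def partition_device_key_query(device_keys_query, is_local, get_domain):
    local_query = {}
    remote_queries_by_server = {}
    for user_id, device_ids in device_keys_query.items():
        if is_local(user_id):
            local_query[user_id] = device_ids
        else:
            server = get_domain(user_id)
            remote_queries_by_server.setdefault(server, {})[user_id] = device_ids
    return local_query, remote_queries_by_server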
from a federated server """ device_keys_query = query_body.get("device_keys", {}) res = yield self.query_local_devices(device_keys_query) defer.returnValue({"device_keys": res}) @defer.inlineCallbacks def claim_one_time_keys(self, query, timeout): local_query = [] remote_queries = {} for user_id, device_keys in query.get("one_time_keys", {}).items(): if self.is_mine_id(user_id): for device_id, algorithm in device_keys.items(): local_query.append((user_id, device_id, algorithm)) else: domain = get_domain_from_id(user_id) remote_queries.setdefault(domain, {})[user_id] = device_keys results = yield self.store.claim_e2e_one_time_keys(local_query) json_result = {} failures = {} for user_id, device_keys in results.items(): for device_id, keys in device_keys.items(): for key_id, json_bytes in keys.items(): json_result.setdefault(user_id, {})[device_id] = { key_id: json.loads(json_bytes) } @defer.inlineCallbacks def claim_client_keys(destination): device_keys = remote_queries[destination] try: remote_result = yield self.federation.claim_client_keys( destination, {"one_time_keys": device_keys}, timeout=timeout ) for user_id, keys in remote_result["one_time_keys"].items(): if user_id in device_keys: json_result[user_id] = keys except CodeMessageException as e: failures[destination] = { "status": e.code, "message": e.message } except NotRetryingDestination as e: failures[destination] = { "status": 503, "message": "Not ready for retry", } except Exception as e: # include ConnectionRefused and other errors failures[destination] = { "status": 503, "message": e.message } yield make_deferred_yieldable(defer.gatherResults([ preserve_fn(claim_client_keys)(destination) for destination in remote_queries ])) logger.info( "Claimed one-time-keys: %s", ",".join(( "%s for %s:%s" % (key_id, user_id, device_id) for user_id, user_keys in json_result.iteritems() for device_id, device_keys in user_keys.iteritems() for key_id, _ in device_keys.iteritems() )), ) defer.returnValue({ "one_time_keys": json_result, "failures": failures }) @defer.inlineCallbacks def upload_keys_for_user(self, user_id, device_id, keys): time_now = self.clock.time_msec() # TODO: Validate the JSON to make sure it has the right keys. device_keys = keys.get("device_keys", None) if device_keys: logger.info( "Updating device_keys for device %r for user %s at %d", device_id, user_id, time_now ) # TODO: Sign the JSON with the server key changed = yield self.store.set_e2e_device_keys( user_id, device_id, time_now, device_keys, ) if changed: # Only notify about device updates *if* the keys actually changed yield self.device_handler.notify_device_update(user_id, [device_id]) one_time_keys = keys.get("one_time_keys", None) if one_time_keys: yield self._upload_one_time_keys_for_user( user_id, device_id, time_now, one_time_keys, ) # the device should have been registered already, but it may have been # deleted due to a race with a DELETE request. Or we may be using an # old access_token without an associated device_id. Either way, we # need to double-check the device is registered to avoid ending up with # keys without a corresponding device. 
yield self.device_handler.check_device_registered(user_id, device_id) result = yield self.store.count_e2e_one_time_keys(user_id, device_id) defer.returnValue({"one_time_key_counts": result}) @defer.inlineCallbacks def _upload_one_time_keys_for_user(self, user_id, device_id, time_now, one_time_keys): logger.info( "Adding one_time_keys %r for device %r for user %r at %d", one_time_keys.keys(), device_id, user_id, time_now, ) # make a list of (alg, id, key) tuples key_list = [] for key_id, key_obj in one_time_keys.items(): algorithm, key_id = key_id.split(":") key_list.append(( algorithm, key_id, key_obj )) # First we check if we have already persisted any of the keys. existing_key_map = yield self.store.get_e2e_one_time_keys( user_id, device_id, [k_id for _, k_id, _ in key_list] ) new_keys = [] # Keys that we need to insert. (alg, id, json) tuples. for algorithm, key_id, key in key_list: ex_json = existing_key_map.get((algorithm, key_id), None) if ex_json: if not _one_time_keys_match(ex_json, key): raise SynapseError( 400, ("One time key %s:%s already exists. " "Old key: %s; new key: %r") % (algorithm, key_id, ex_json, key) ) else: new_keys.append((algorithm, key_id, encode_canonical_json(key))) yield self.store.add_e2e_one_time_keys( user_id, device_id, time_now, new_keys ) def _one_time_keys_match(old_key_json, new_key): old_key = json.loads(old_key_json) # if either is a string rather than an object, they must match exactly if not isinstance(old_key, dict) or not isinstance(new_key, dict): return old_key == new_key # otherwise, we strip off the 'signatures' if any, because it's legitimate # for different upload attempts to have different signatures. old_key.pop("signatures", None) new_key_copy = dict(new_key) new_key_copy.pop("signatures", None) return old_key == new_key_copy synapse-0.24.0/synapse/handlers/events.py000066400000000000000000000121341317335640100204230ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
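# Editor's note: an illustration of how the one_time_keys upload payload is
# decomposed by _upload_one_time_keys_for_user above: each dict key has the form
# "<algorithm>:<key_id>", and (algorithm, key_id, key_object) tuples are what get
# checked against existing keys and then persisted.
def decompose_one_time_keys(one_time_keys):
    key_list = []
    for full_key_id, key_obj in one_time_keys.items():
        algorithm, key_id = full_key_id.split(":")
        key_list.append((algorithm, key_id, key_obj))
    return key_list

# e.g. decompose_one_time_keys(
#     {"signed_curve25519:AAAAHQ": {"key": "...", "signatures": {}}}
# ) -> [("signed_curve25519", "AAAAHQ", {"key": "...", "signatures": {}})]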
from twisted.internet import defer from synapse.util.logutils import log_function from synapse.types import UserID from synapse.events.utils import serialize_event from synapse.api.constants import Membership, EventTypes from synapse.events import EventBase from ._base import BaseHandler import logging import random logger = logging.getLogger(__name__) class EventStreamHandler(BaseHandler): def __init__(self, hs): super(EventStreamHandler, self).__init__(hs) # Count of active streams per user self._streams_per_user = {} # Grace timers per user to delay the "stopped" signal self._stop_timer_per_user = {} self.distributor = hs.get_distributor() self.distributor.declare("started_user_eventstream") self.distributor.declare("stopped_user_eventstream") self.clock = hs.get_clock() self.notifier = hs.get_notifier() self.state = hs.get_state_handler() @defer.inlineCallbacks @log_function def get_stream(self, auth_user_id, pagin_config, timeout=0, as_client_event=True, affect_presence=True, only_keys=None, room_id=None, is_guest=False): """Fetches the events stream for a given user. If `only_keys` is not None, events from keys will be sent down. """ auth_user = UserID.from_string(auth_user_id) presence_handler = self.hs.get_presence_handler() context = yield presence_handler.user_syncing( auth_user_id, affect_presence=affect_presence, ) with context: if timeout: # If they've set a timeout set a minimum limit. timeout = max(timeout, 500) # Add some randomness to this value to try and mitigate against # thundering herds on restart. timeout = random.randint(int(timeout * 0.9), int(timeout * 1.1)) events, tokens = yield self.notifier.get_events_for( auth_user, pagin_config, timeout, only_keys=only_keys, is_guest=is_guest, explicit_room_id=room_id ) # When the user joins a new room, or another user joins a currently # joined room, we need to send down presence for those users. to_add = [] for event in events: if not isinstance(event, EventBase): continue if event.type == EventTypes.Member: if event.membership != Membership.JOIN: continue # Send down presence. if event.state_key == auth_user_id: # Send down presence for everyone in the room. users = yield self.state.get_current_user_in_room(event.room_id) states = yield presence_handler.get_states( users, as_event=True, ) to_add.extend(states) else: ev = yield presence_handler.get_state( UserID.from_string(event.state_key), as_event=True, ) to_add.append(ev) events.extend(to_add) time_now = self.clock.time_msec() chunks = [ serialize_event(e, time_now, as_client_event) for e in events ] chunk = { "chunk": chunks, "start": tokens[0].to_string(), "end": tokens[1].to_string(), } defer.returnValue(chunk) class EventHandler(BaseHandler): @defer.inlineCallbacks def get_event(self, user, event_id): """Retrieve a single specified event. Args: user (synapse.types.UserID): The user requesting the event event_id (str): The event ID to obtain. Returns: dict: An event, or None if there is no event matching this ID. Raises: SynapseError if there was a problem retrieving this event, or AuthError if the user does not have the rights to inspect this event. 
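# Editor's note: a standalone illustration of the timeout jitter applied in
# get_stream above. Long-poll timeouts are clamped to a minimum and then randomised
# by roughly +/-10% so that clients reconnecting after a restart do not all time
# out at the same instant (the "thundering herd" the comment above refers to).
import random

def jitter_timeout(timeout_ms, minimum_ms=500):
    if not timeout_ms:
        return timeout_ms
    timeout_ms = max(timeout_ms, minimum_ms)
    return random.randint(int(timeout_ms * 0.9), int(timeout_ms * 1.1))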
""" event = yield self.store.get_event(event_id) if not event: defer.returnValue(None) return if hasattr(event, "room_id"): yield self.auth.check_joined_room(event.room_id, user.to_string()) defer.returnValue(event) synapse-0.24.0/synapse/handlers/federation.py000066400000000000000000002372111317335640100212440ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Contains handlers for federation events.""" from signedjson.key import decode_verify_key_bytes from signedjson.sign import verify_signed_json from unpaddedbase64 import decode_base64 from ._base import BaseHandler from synapse.api.errors import ( AuthError, FederationError, StoreError, CodeMessageException, SynapseError, ) from synapse.api.constants import EventTypes, Membership, RejectedReason from synapse.events.validator import EventValidator from synapse.util import unwrapFirstError, logcontext from synapse.util.metrics import measure_func from synapse.util.logutils import log_function from synapse.util.async import run_on_reactor, Linearizer from synapse.util.frozenutils import unfreeze from synapse.crypto.event_signing import ( compute_event_signature, add_hashes_and_signatures, ) from synapse.types import UserID, get_domain_from_id from synapse.events.utils import prune_event from synapse.util.retryutils import NotRetryingDestination from synapse.util.distributor import user_joined_room from twisted.internet import defer import itertools import logging logger = logging.getLogger(__name__) class FederationHandler(BaseHandler): """Handles events that originated from federation. Responsible for: a) handling received Pdus before handing them on as Events to the rest of the home server (including auth and state conflict resoultion) b) converting events that were produced by local clients that may need to be sent to remote home servers. c) doing the necessary dances to invite remote users and join remote rooms. """ def __init__(self, hs): super(FederationHandler, self).__init__(hs) self.hs = hs self.store = hs.get_datastore() self.replication_layer = hs.get_replication_layer() self.state_handler = hs.get_state_handler() self.server_name = hs.hostname self.keyring = hs.get_keyring() self.action_generator = hs.get_action_generator() self.is_mine_id = hs.is_mine_id self.pusher_pool = hs.get_pusherpool() self.spam_checker = hs.get_spam_checker() self.replication_layer.set_handler(self) # When joining a room we need to queue any events for that room up self.room_queues = {} self._room_pdu_linearizer = Linearizer("fed_room_pdu") @defer.inlineCallbacks @log_function def on_receive_pdu(self, origin, pdu, get_missing=True): """ Process a PDU received via a federation /send/ transaction, or via backfill of missing prev_events Args: origin (str): server which initiated the /send/ transaction. Will be used to fetch missing events or state. 
pdu (FrozenEvent): received PDU get_missing (bool): True if we should fetch missing prev_events Returns (Deferred): completes with None """ # We reprocess pdus when we have seen them only as outliers existing = yield self.get_persisted_pdu( origin, pdu.event_id, do_auth=False ) # FIXME: Currently we fetch an event again when we already have it # if it has been marked as an outlier. already_seen = ( existing and ( not existing.internal_metadata.is_outlier() or pdu.internal_metadata.is_outlier() ) ) if already_seen: logger.debug("Already seen pdu %s", pdu.event_id) return # If we are currently in the process of joining this room, then we # queue up events for later processing. if pdu.room_id in self.room_queues: logger.info("Ignoring PDU %s for room %s from %s for now; join " "in progress", pdu.event_id, pdu.room_id, origin) self.room_queues[pdu.room_id].append((pdu, origin)) return # If we're no longer in the room just ditch the event entirely. This # is probably an old server that has come back and thinks we're still # in the room (or we've been rejoined to the room by a state reset). # # If we were never in the room then maybe our database got vaped and # we should check if we *are* in fact in the room. If we are then we # can magically rejoin the room. is_in_room = yield self.auth.check_host_in_room( pdu.room_id, self.server_name ) if not is_in_room: was_in_room = yield self.store.was_host_joined( pdu.room_id, self.server_name, ) if was_in_room: logger.info( "Ignoring PDU %s for room %s from %s as we've left the room!", pdu.event_id, pdu.room_id, origin, ) return state = None auth_chain = [] have_seen = yield self.store.have_events( [ev for ev, _ in pdu.prev_events] ) fetch_state = False # Get missing pdus if necessary. if not pdu.internal_metadata.is_outlier(): # We only backfill backwards to the min depth. min_depth = yield self.get_min_depth_for_context( pdu.room_id ) logger.debug( "_handle_new_pdu min_depth for %s: %d", pdu.room_id, min_depth ) prevs = {e_id for e_id, _ in pdu.prev_events} seen = set(have_seen.keys()) if min_depth and pdu.depth < min_depth: # This is so that we don't notify the user about this # message, to work around the fact that some events will # reference really really old events we really don't want to # send to the clients. pdu.internal_metadata.outlier = True elif min_depth and pdu.depth > min_depth: if get_missing and prevs - seen: # If we're missing stuff, ensure we only fetch stuff one # at a time. logger.info( "Acquiring lock for room %r to fetch %d missing events: %r...", pdu.room_id, len(prevs - seen), list(prevs - seen)[:5], ) with (yield self._room_pdu_linearizer.queue(pdu.room_id)): logger.info( "Acquired lock for room %r to fetch %d missing events", pdu.room_id, len(prevs - seen), ) yield self._get_missing_events_for_pdu( origin, pdu, prevs, min_depth ) # Update the set of things we've seen after trying to # fetch the missing stuff have_seen = yield self.store.have_events(prevs) seen = set(have_seen.iterkeys()) if not prevs - seen: logger.info( "Found all missing prev events for %s", pdu.event_id ) elif prevs - seen: logger.info( "Not fetching %d missing events for room %r,event %s: %r...", len(prevs - seen), pdu.room_id, pdu.event_id, list(prevs - seen)[:5], ) if prevs - seen: logger.info( "Still missing %d events for room %r: %r...", len(prevs - seen), pdu.room_id, list(prevs - seen)[:5] ) fetch_state = True if fetch_state: # We need to get the state at this event, since we haven't # processed all the prev events. 
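# Editor's note: a much-simplified, standalone sketch of the depth-based decision in
# on_receive_pdu above. Events below the room's minimum known depth are marked as
# outliers (stored but not sent down to clients); events above it whose prev_events
# are partly unknown trigger an attempt to fetch the missing history (and, if that
# still leaves a gap, a fetch of the room state instead).
def classify_incoming_pdu(pdu_depth, min_depth, have_missing_prevs):
    if min_depth and pdu_depth < min_depth:
        return "outlier"
    if min_depth and pdu_depth > min_depth and have_missing_prevs:
        return "fetch_missing_events"
    return "process"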
logger.debug( "_handle_new_pdu getting state for %s", pdu.room_id ) try: state, auth_chain = yield self.replication_layer.get_state_for_room( origin, pdu.room_id, pdu.event_id, ) except: logger.exception("Failed to get state for event: %s", pdu.event_id) yield self._process_received_pdu( origin, pdu, state=state, auth_chain=auth_chain, ) @defer.inlineCallbacks def _get_missing_events_for_pdu(self, origin, pdu, prevs, min_depth): """ Args: origin (str): Origin of the pdu. Will be called to get the missing events pdu: received pdu prevs (set(str)): List of event ids which we are missing min_depth (int): Minimum depth of events to return. """ # We recalculate seen, since it may have changed. have_seen = yield self.store.have_events(prevs) seen = set(have_seen.keys()) if not prevs - seen: return latest = yield self.store.get_latest_event_ids_in_room( pdu.room_id ) # We add the prev events that we have seen to the latest # list to ensure the remote server doesn't give them to us latest = set(latest) latest |= seen logger.info( "Missing %d events for room %r pdu %s: %r...", len(prevs - seen), pdu.room_id, pdu.event_id, list(prevs - seen)[:5] ) # XXX: we set timeout to 10s to help workaround # https://github.com/matrix-org/synapse/issues/1733. # The reason is to avoid holding the linearizer lock # whilst processing inbound /send transactions, causing # FDs to stack up and block other inbound transactions # which empirically can currently take up to 30 minutes. # # N.B. this explicitly disables retry attempts. # # N.B. this also increases our chances of falling back to # fetching fresh state for the room if the missing event # can't be found, which slightly reduces our security. # it may also increase our DAG extremity count for the room, # causing additional state resolution? See #1760. # However, fetching state doesn't hold the linearizer lock # apparently. # # see https://github.com/matrix-org/synapse/pull/1744 missing_events = yield self.replication_layer.get_missing_events( origin, pdu.room_id, earliest_events_ids=list(latest), latest_events=[pdu], limit=10, min_depth=min_depth, timeout=10000, ) logger.info( "Got %d events: %r...", len(missing_events), [e.event_id for e in missing_events[:5]] ) # We want to sort these by depth so we process them and # tell clients about them in order. missing_events.sort(key=lambda x: x.depth) for e in missing_events: logger.info("Handling found event %s", e.event_id) yield self.on_receive_pdu( origin, e, get_missing=False ) @log_function @defer.inlineCallbacks def _process_received_pdu(self, origin, pdu, state, auth_chain): """ Called when we have a new pdu. We need to do auth checks and put it through the StateHandler. """ event = pdu logger.debug("Processing event: %s", event) # FIXME (erikj): Awful hack to make the case where we are not currently # in the room work # If state and auth_chain are None, then we don't need to do this check # as we already know we have enough state in the DB to handle this # event. 
if state and auth_chain and not event.internal_metadata.is_outlier(): is_in_room = yield self.auth.check_host_in_room( event.room_id, self.server_name ) else: is_in_room = True if not is_in_room: logger.info( "Got event for room we're not in: %r %r", event.room_id, event.event_id ) try: event_stream_id, max_stream_id = yield self._persist_auth_tree( origin, auth_chain, state, event ) except AuthError as e: raise FederationError( "ERROR", e.code, e.msg, affected=event.event_id, ) else: event_ids = set() if state: event_ids |= {e.event_id for e in state} if auth_chain: event_ids |= {e.event_id for e in auth_chain} seen_ids = set( (yield self.store.have_events(event_ids)).keys() ) if state and auth_chain is not None: # If we have any state or auth_chain given to us by the replication # layer, then we should handle them (if we haven't before.) event_infos = [] for e in itertools.chain(auth_chain, state): if e.event_id in seen_ids: continue e.internal_metadata.outlier = True auth_ids = [e_id for e_id, _ in e.auth_events] auth = { (e.type, e.state_key): e for e in auth_chain if e.event_id in auth_ids or e.type == EventTypes.Create } event_infos.append({ "event": e, "auth_events": auth, }) seen_ids.add(e.event_id) yield self._handle_new_events(origin, event_infos) try: context, event_stream_id, max_stream_id = yield self._handle_new_event( origin, event, state=state, ) except AuthError as e: raise FederationError( "ERROR", e.code, e.msg, affected=event.event_id, ) room = yield self.store.get_room(event.room_id) if not room: try: yield self.store.store_room( room_id=event.room_id, room_creator_user_id="", is_public=False, ) except StoreError: logger.exception("Failed to store room.") extra_users = [] if event.type == EventTypes.Member: target_user_id = event.state_key target_user = UserID.from_string(target_user_id) extra_users.append(target_user) self.notifier.on_new_room_event( event, event_stream_id, max_stream_id, extra_users=extra_users ) if event.type == EventTypes.Member: if event.membership == Membership.JOIN: # Only fire user_joined_room if the user has acutally # joined the room. Don't bother if the user is just # changing their profile info. newly_joined = True prev_state_id = context.prev_state_ids.get( (event.type, event.state_key) ) if prev_state_id: prev_state = yield self.store.get_event( prev_state_id, allow_none=True, ) if prev_state and prev_state.membership == Membership.JOIN: newly_joined = False if newly_joined: user = UserID.from_string(event.state_key) yield user_joined_room(self.distributor, user, event.room_id) @measure_func("_filter_events_for_server") @defer.inlineCallbacks def _filter_events_for_server(self, server_name, room_id, events): event_to_state_ids = yield self.store.get_state_ids_for_events( frozenset(e.event_id for e in events), types=( (EventTypes.RoomHistoryVisibility, ""), (EventTypes.Member, None), ) ) # We only want to pull out member events that correspond to the # server's domain. def check_match(id): try: return server_name == get_domain_from_id(id) except: return False # Parses mapping `event_id -> (type, state_key) -> state event_id` # to get all state ids that we're interested in. 
event_map = yield self.store.get_events([ e_id for key_to_eid in event_to_state_ids.values() for key, e_id in key_to_eid.items() if key[0] != EventTypes.Member or check_match(key[1]) ]) event_to_state = { e_id: { key: event_map[inner_e_id] for key, inner_e_id in key_to_eid.items() if inner_e_id in event_map } for e_id, key_to_eid in event_to_state_ids.items() } def redact_disallowed(event, state): if not state: return event history = state.get((EventTypes.RoomHistoryVisibility, ''), None) if history: visibility = history.content.get("history_visibility", "shared") if visibility in ["invited", "joined"]: # We now loop through all state events looking for # membership states for the requesting server to determine # if the server is either in the room or has been invited # into the room. for ev in state.values(): if ev.type != EventTypes.Member: continue try: domain = get_domain_from_id(ev.state_key) except: continue if domain != server_name: continue memtype = ev.membership if memtype == Membership.JOIN: return event elif memtype == Membership.INVITE: if visibility == "invited": return event else: return prune_event(event) return event defer.returnValue([ redact_disallowed(e, event_to_state[e.event_id]) for e in events ]) @log_function @defer.inlineCallbacks def backfill(self, dest, room_id, limit, extremities): """ Trigger a backfill request to `dest` for the given `room_id` This will attempt to get more events from the remote. This may return be successfull and still return no events if the other side has no new events to offer. """ if dest == self.server_name: raise SynapseError(400, "Can't backfill from self.") events = yield self.replication_layer.backfill( dest, room_id, limit=limit, extremities=extremities, ) # Don't bother processing events we already have. seen_events = yield self.store.have_events_in_timeline( set(e.event_id for e in events) ) events = [e for e in events if e.event_id not in seen_events] if not events: defer.returnValue([]) event_map = {e.event_id: e for e in events} event_ids = set(e.event_id for e in events) edges = [ ev.event_id for ev in events if set(e_id for e_id, _ in ev.prev_events) - event_ids ] logger.info( "backfill: Got %d events with %d edges", len(events), len(edges), ) # For each edge get the current state. auth_events = {} state_events = {} events_to_state = {} for e_id in edges: state, auth = yield self.replication_layer.get_state_for_room( destination=dest, room_id=room_id, event_id=e_id ) auth_events.update({a.event_id: a for a in auth}) auth_events.update({s.event_id: s for s in state}) state_events.update({s.event_id: s for s in state}) events_to_state[e_id] = state required_auth = set( a_id for event in events + state_events.values() + auth_events.values() for a_id, _ in event.auth_events ) auth_events.update({ e_id: event_map[e_id] for e_id in required_auth if e_id in event_map }) missing_auth = required_auth - set(auth_events) failed_to_fetch = set() # Try and fetch any missing auth events from both DB and remote servers. # We repeatedly do this until we stop finding new auth events. 
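# Each pass either resolves some of `missing_auth` (from our own DB, or by
# fetching the PDUs from `dest`) or moves the stragglers into
# `failed_to_fetch`, so the loop below always terminates.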
while missing_auth - failed_to_fetch: logger.info("Missing auth for backfill: %r", missing_auth) ret_events = yield self.store.get_events(missing_auth - failed_to_fetch) auth_events.update(ret_events) required_auth.update( a_id for event in ret_events.values() for a_id, _ in event.auth_events ) missing_auth = required_auth - set(auth_events) if missing_auth - failed_to_fetch: logger.info( "Fetching missing auth for backfill: %r", missing_auth - failed_to_fetch ) results = yield logcontext.make_deferred_yieldable(defer.gatherResults( [ logcontext.preserve_fn(self.replication_layer.get_pdu)( [dest], event_id, outlier=True, timeout=10000, ) for event_id in missing_auth - failed_to_fetch ], consumeErrors=True )).addErrback(unwrapFirstError) auth_events.update({a.event_id: a for a in results if a}) required_auth.update( a_id for event in results if event for a_id, _ in event.auth_events ) missing_auth = required_auth - set(auth_events) failed_to_fetch = missing_auth - set(auth_events) seen_events = yield self.store.have_events( set(auth_events.keys()) | set(state_events.keys()) ) ev_infos = [] for a in auth_events.values(): if a.event_id in seen_events: continue a.internal_metadata.outlier = True ev_infos.append({ "event": a, "auth_events": { (auth_events[a_id].type, auth_events[a_id].state_key): auth_events[a_id] for a_id, _ in a.auth_events if a_id in auth_events } }) for e_id in events_to_state: ev_infos.append({ "event": event_map[e_id], "state": events_to_state[e_id], "auth_events": { (auth_events[a_id].type, auth_events[a_id].state_key): auth_events[a_id] for a_id, _ in event_map[e_id].auth_events if a_id in auth_events } }) yield self._handle_new_events( dest, ev_infos, backfilled=True, ) events.sort(key=lambda e: e.depth) for event in events: if event in events_to_state: continue # We store these one at a time since each event depends on the # previous to work out the state. # TODO: We can probably do something more clever here. yield self._handle_new_event( dest, event, backfilled=True, ) defer.returnValue(events) @defer.inlineCallbacks def maybe_backfill(self, room_id, current_depth): """Checks the database to see if we should backfill before paginating, and if so do. """ extremities = yield self.store.get_oldest_events_with_depth_in_room( room_id ) if not extremities: logger.debug("Not backfilling as no extremeties found.") return # Check if we reached a point where we should start backfilling. sorted_extremeties_tuple = sorted( extremities.items(), key=lambda e: -int(e[1]) ) max_depth = sorted_extremeties_tuple[0][1] # We don't want to specify too many extremities as it causes the backfill # request URI to be too long. extremities = dict(sorted_extremeties_tuple[:5]) if current_depth > max_depth: logger.debug( "Not backfilling as we don't need to. %d < %d", max_depth, current_depth, ) return # Now we need to decide which hosts to hit first. # First we try hosts that are already in the room # TODO: HEURISTIC ALERT. 
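# The heuristic: look at the current joined members, work out which remote
# servers they belong to, and order those servers by the depth of their
# earliest join, on the basis that servers which joined the room earlier are
# more likely to hold the history we want to backfill.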
curr_state = yield self.state_handler.get_current_state(room_id) def get_domains_from_state(state): joined_users = [ (state_key, int(event.depth)) for (e_type, state_key), event in state.items() if e_type == EventTypes.Member and event.membership == Membership.JOIN ] joined_domains = {} for u, d in joined_users: try: dom = get_domain_from_id(u) old_d = joined_domains.get(dom) if old_d: joined_domains[dom] = min(d, old_d) else: joined_domains[dom] = d except: pass return sorted(joined_domains.items(), key=lambda d: d[1]) curr_domains = get_domains_from_state(curr_state) likely_domains = [ domain for domain, depth in curr_domains if domain != self.server_name ] @defer.inlineCallbacks def try_backfill(domains): # TODO: Should we try multiple of these at a time? for dom in domains: try: yield self.backfill( dom, room_id, limit=100, extremities=[e for e in extremities.keys()] ) # If this succeeded then we probably already have the # appropriate stuff. # TODO: We can probably do something more intelligent here. defer.returnValue(True) except SynapseError as e: logger.info( "Failed to backfill from %s because %s", dom, e, ) continue except CodeMessageException as e: if 400 <= e.code < 500: raise logger.info( "Failed to backfill from %s because %s", dom, e, ) continue except NotRetryingDestination as e: logger.info(e.message) continue except Exception as e: logger.exception( "Failed to backfill from %s because %s", dom, e, ) continue defer.returnValue(False) success = yield try_backfill(likely_domains) if success: defer.returnValue(True) # Huh, well *those* domains didn't work out. Lets try some domains # from the time. tried_domains = set(likely_domains) tried_domains.add(self.server_name) event_ids = list(extremities.keys()) logger.debug("calling resolve_state_groups in _maybe_backfill") states = yield logcontext.make_deferred_yieldable(defer.gatherResults( [ logcontext.preserve_fn(self.state_handler.resolve_state_groups)( room_id, [e] ) for e in event_ids ], consumeErrors=True, )) states = dict(zip(event_ids, [s.state for s in states])) state_map = yield self.store.get_events( [e_id for ids in states.values() for e_id in ids], get_prev_content=False ) states = { key: { k: state_map[e_id] for k, e_id in state_dict.items() if e_id in state_map } for key, state_dict in states.items() } for e_id, _ in sorted_extremeties_tuple: likely_domains = get_domains_from_state(states[e_id]) success = yield try_backfill([ dom for dom in likely_domains if dom not in tried_domains ]) if success: defer.returnValue(True) tried_domains.update(likely_domains) defer.returnValue(False) @defer.inlineCallbacks def send_invite(self, target_host, event): """ Sends the invite to the remote server for signing. Invites must be signed by the invitee's server before distribution. """ pdu = yield self.replication_layer.send_invite( destination=target_host, room_id=event.room_id, event_id=event.event_id, pdu=event ) defer.returnValue(pdu) @defer.inlineCallbacks def on_event_auth(self, event_id): event = yield self.store.get_event(event_id) auth = yield self.store.get_auth_chain( [auth_id for auth_id, _ in event.auth_events], include_given=True ) for event in auth: event.signatures.update( compute_event_signature( event, self.hs.hostname, self.hs.config.signing_key[0] ) ) defer.returnValue([e for e in auth]) @log_function @defer.inlineCallbacks def do_invite_join(self, target_hosts, room_id, joinee, content): """ Attempts to join the `joinee` to the room `room_id` via the server `target_host`. 
This first triggers a /make_join/ request that returns a partial event that we can fill out and sign. This is then sent to the remote server via /send_join/ which responds with the state at that event and the auth_chains. We suspend processing of any received events from this room until we have finished processing the join. """ logger.debug("Joining %s to %s", joinee, room_id) origin, event = yield self._make_and_verify_event( target_hosts, room_id, joinee, "join", content, ) # This shouldn't happen, because the RoomMemberHandler has a # linearizer lock which only allows one operation per user per room # at a time - so this is just paranoia. assert (room_id not in self.room_queues) self.room_queues[room_id] = [] yield self.store.clean_room_for_join(room_id) handled_events = set() try: event = self._sign_event(event) # Try the host we successfully got a response to /make_join/ # request first. try: target_hosts.remove(origin) target_hosts.insert(0, origin) except ValueError: pass ret = yield self.replication_layer.send_join(target_hosts, event) origin = ret["origin"] state = ret["state"] auth_chain = ret["auth_chain"] auth_chain.sort(key=lambda e: e.depth) handled_events.update([s.event_id for s in state]) handled_events.update([a.event_id for a in auth_chain]) handled_events.add(event.event_id) logger.debug("do_invite_join auth_chain: %s", auth_chain) logger.debug("do_invite_join state: %s", state) logger.debug("do_invite_join event: %s", event) try: yield self.store.store_room( room_id=room_id, room_creator_user_id="", is_public=False ) except: # FIXME pass event_stream_id, max_stream_id = yield self._persist_auth_tree( origin, auth_chain, state, event ) self.notifier.on_new_room_event( event, event_stream_id, max_stream_id, extra_users=[joinee] ) logger.debug("Finished joining %s to %s", joinee, room_id) finally: room_queue = self.room_queues[room_id] del self.room_queues[room_id] # we don't need to wait for the queued events to be processed - # it's just a best-effort thing at this point. We do want to do # them roughly in order, though, otherwise we'll end up making # lots of requests for missing prev_events which we do actually # have. Hence we fire off the deferred, but don't wait for it. logcontext.preserve_fn(self._handle_queued_pdus)(room_queue) defer.returnValue(True) @defer.inlineCallbacks def _handle_queued_pdus(self, room_queue): """Process PDUs which got queued up while we were busy send_joining. Args: room_queue (list[FrozenEvent, str]): list of PDUs to be processed and the servers that sent them """ for p, origin in room_queue: try: logger.info("Processing queued PDU %s which was received " "while we were joining %s", p.event_id, p.room_id) yield self.on_receive_pdu(origin, p) except Exception as e: logger.warn( "Error handling queued PDU %s from %s: %s", p.event_id, origin, e) @defer.inlineCallbacks @log_function def on_make_join_request(self, room_id, user_id): """ We've received a /make_join/ request, so we create a partial join event for the room and return that. We do *not* persist or process it until the other server has signed it and sent it back. 
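(Specifically, the joining server signs the returned event and submits it
back to us via /send_join/, at which point `on_send_join_request` performs
the full auth checks.)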
""" event_content = {"membership": Membership.JOIN} builder = self.event_builder_factory.new({ "type": EventTypes.Member, "content": event_content, "room_id": room_id, "sender": user_id, "state_key": user_id, }) try: message_handler = self.hs.get_handlers().message_handler event, context = yield message_handler._create_new_client_event( builder=builder, ) except AuthError as e: logger.warn("Failed to create join %r because %s", event, e) raise e # The remote hasn't signed it yet, obviously. We'll do the full checks # when we get the event back in `on_send_join_request` yield self.auth.check_from_context(event, context, do_sig_check=False) defer.returnValue(event) @defer.inlineCallbacks @log_function def on_send_join_request(self, origin, pdu): """ We have received a join event for a room. Fully process it and respond with the current state and auth chains. """ event = pdu logger.debug( "on_send_join_request: Got event: %s, signatures: %s", event.event_id, event.signatures, ) event.internal_metadata.outlier = False # Send this event on behalf of the origin server. # # The reasons we have the destination server rather than the origin # server send it are slightly mysterious: the origin server should have # all the neccessary state once it gets the response to the send_join, # so it could send the event itself if it wanted to. It may be that # doing it this way reduces failure modes, or avoids certain attacks # where a new server selectively tells a subset of the federation that # it has joined. # # The fact is that, as of the current writing, Synapse doesn't send out # the join event over federation after joining, and changing it now # would introduce the danger of backwards-compatibility problems. event.internal_metadata.send_on_behalf_of = origin context, event_stream_id, max_stream_id = yield self._handle_new_event( origin, event ) logger.debug( "on_send_join_request: After _handle_new_event: %s, sigs: %s", event.event_id, event.signatures, ) extra_users = [] if event.type == EventTypes.Member: target_user_id = event.state_key target_user = UserID.from_string(target_user_id) extra_users.append(target_user) self.notifier.on_new_room_event( event, event_stream_id, max_stream_id, extra_users=extra_users ) if event.type == EventTypes.Member: if event.content["membership"] == Membership.JOIN: user = UserID.from_string(event.state_key) yield user_joined_room(self.distributor, user, event.room_id) state_ids = context.prev_state_ids.values() auth_chain = yield self.store.get_auth_chain(state_ids) state = yield self.store.get_events(context.prev_state_ids.values()) defer.returnValue({ "state": state.values(), "auth_chain": auth_chain, }) @defer.inlineCallbacks def on_invite_request(self, origin, pdu): """ We've got an invite event. Process and persist it. Sign it. Respond with the now signed event. 
""" event = pdu if event.state_key is None: raise SynapseError(400, "The invite event did not have a state key") is_blocked = yield self.store.is_room_blocked(event.room_id) if is_blocked: raise SynapseError(403, "This room has been blocked on this server") if self.hs.config.block_non_admin_invites: raise SynapseError(403, "This server does not accept room invites") if not self.spam_checker.user_may_invite( event.sender, event.state_key, event.room_id, ): raise SynapseError( 403, "This user is not permitted to send invites to this server/user" ) membership = event.content.get("membership") if event.type != EventTypes.Member or membership != Membership.INVITE: raise SynapseError(400, "The event was not an m.room.member invite event") sender_domain = get_domain_from_id(event.sender) if sender_domain != origin: raise SynapseError(400, "The invite event was not from the server sending it") if not self.is_mine_id(event.state_key): raise SynapseError(400, "The invite event must be for this server") event.internal_metadata.outlier = True event.internal_metadata.invite_from_remote = True event.signatures.update( compute_event_signature( event, self.hs.hostname, self.hs.config.signing_key[0] ) ) context = yield self.state_handler.compute_event_context(event) event_stream_id, max_stream_id = yield self.store.persist_event( event, context=context, ) target_user = UserID.from_string(event.state_key) self.notifier.on_new_room_event( event, event_stream_id, max_stream_id, extra_users=[target_user], ) defer.returnValue(event) @defer.inlineCallbacks def do_remotely_reject_invite(self, target_hosts, room_id, user_id): origin, event = yield self._make_and_verify_event( target_hosts, room_id, user_id, "leave" ) # Mark as outlier as we don't have any state for this event; we're not # even in the room. event.internal_metadata.outlier = True event = self._sign_event(event) # Try the host that we succesfully called /make_leave/ on first for # the /send_leave/ request. try: target_hosts.remove(origin) target_hosts.insert(0, origin) except ValueError: pass yield self.replication_layer.send_leave( target_hosts, event ) context = yield self.state_handler.compute_event_context(event) event_stream_id, max_stream_id = yield self.store.persist_event( event, context=context, ) target_user = UserID.from_string(event.state_key) self.notifier.on_new_room_event( event, event_stream_id, max_stream_id, extra_users=[target_user], ) defer.returnValue(event) @defer.inlineCallbacks def _make_and_verify_event(self, target_hosts, room_id, user_id, membership, content={},): origin, pdu = yield self.replication_layer.make_membership_event( target_hosts, room_id, user_id, membership, content, ) logger.debug("Got response to make_%s: %s", membership, pdu) event = pdu # We should assert some things. 
# FIXME: Do this in a nicer way assert(event.type == EventTypes.Member) assert(event.user_id == user_id) assert(event.state_key == user_id) assert(event.room_id == room_id) defer.returnValue((origin, event)) def _sign_event(self, event): event.internal_metadata.outlier = False builder = self.event_builder_factory.new( unfreeze(event.get_pdu_json()) ) builder.event_id = self.event_builder_factory.create_event_id() builder.origin = self.hs.hostname if not hasattr(event, "signatures"): builder.signatures = {} add_hashes_and_signatures( builder, self.hs.hostname, self.hs.config.signing_key[0], ) return builder.build() @defer.inlineCallbacks @log_function def on_make_leave_request(self, room_id, user_id): """ We've received a /make_leave/ request, so we create a partial join event for the room and return that. We do *not* persist or process it until the other server has signed it and sent it back. """ builder = self.event_builder_factory.new({ "type": EventTypes.Member, "content": {"membership": Membership.LEAVE}, "room_id": room_id, "sender": user_id, "state_key": user_id, }) message_handler = self.hs.get_handlers().message_handler event, context = yield message_handler._create_new_client_event( builder=builder, ) try: # The remote hasn't signed it yet, obviously. We'll do the full checks # when we get the event back in `on_send_leave_request` yield self.auth.check_from_context(event, context, do_sig_check=False) except AuthError as e: logger.warn("Failed to create new leave %r because %s", event, e) raise e defer.returnValue(event) @defer.inlineCallbacks @log_function def on_send_leave_request(self, origin, pdu): """ We have received a leave event for a room. Fully process it.""" event = pdu logger.debug( "on_send_leave_request: Got event: %s, signatures: %s", event.event_id, event.signatures, ) event.internal_metadata.outlier = False context, event_stream_id, max_stream_id = yield self._handle_new_event( origin, event ) logger.debug( "on_send_leave_request: After _handle_new_event: %s, sigs: %s", event.event_id, event.signatures, ) extra_users = [] if event.type == EventTypes.Member: target_user_id = event.state_key target_user = UserID.from_string(target_user_id) extra_users.append(target_user) self.notifier.on_new_room_event( event, event_stream_id, max_stream_id, extra_users=extra_users ) defer.returnValue(None) @defer.inlineCallbacks def get_state_for_pdu(self, room_id, event_id): """Returns the state at the event. i.e. not including said event. """ yield run_on_reactor() state_groups = yield self.store.get_state_groups( room_id, [event_id] ) if state_groups: _, state = state_groups.items().pop() results = { (e.type, e.state_key): e for e in state } event = yield self.store.get_event(event_id) if event and event.is_state(): # Get previous state if "replaces_state" in event.unsigned: prev_id = event.unsigned["replaces_state"] if prev_id != event.event_id: prev_event = yield self.store.get_event(prev_id) results[(event.type, event.state_key)] = prev_event else: del results[(event.type, event.state_key)] res = results.values() for event in res: # We sign these again because there was a bug where we # incorrectly signed things the first time round if self.is_mine_id(event.event_id): event.signatures.update( compute_event_signature( event, self.hs.hostname, self.hs.config.signing_key[0] ) ) defer.returnValue(res) else: defer.returnValue([]) @defer.inlineCallbacks def get_state_ids_for_pdu(self, room_id, event_id): """Returns the state at the event. i.e. not including said event. 
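Unlike get_state_for_pdu above, this returns event ids rather than the full
events.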
""" yield run_on_reactor() state_groups = yield self.store.get_state_groups_ids( room_id, [event_id] ) if state_groups: _, state = state_groups.items().pop() results = state event = yield self.store.get_event(event_id) if event and event.is_state(): # Get previous state if "replaces_state" in event.unsigned: prev_id = event.unsigned["replaces_state"] if prev_id != event.event_id: results[(event.type, event.state_key)] = prev_id else: results.pop((event.type, event.state_key), None) defer.returnValue(results.values()) else: defer.returnValue([]) @defer.inlineCallbacks @log_function def on_backfill_request(self, origin, room_id, pdu_list, limit): in_room = yield self.auth.check_host_in_room(room_id, origin) if not in_room: raise AuthError(403, "Host not in room.") events = yield self.store.get_backfill_events( room_id, pdu_list, limit ) events = yield self._filter_events_for_server(origin, room_id, events) defer.returnValue(events) @defer.inlineCallbacks @log_function def get_persisted_pdu(self, origin, event_id, do_auth=True): """ Get a PDU from the database with given origin and id. Returns: Deferred: Results in a `Pdu`. """ event = yield self.store.get_event( event_id, allow_none=True, allow_rejected=True, ) if event: if self.is_mine_id(event.event_id): # FIXME: This is a temporary work around where we occasionally # return events slightly differently than when they were # originally signed event.signatures.update( compute_event_signature( event, self.hs.hostname, self.hs.config.signing_key[0] ) ) if do_auth: in_room = yield self.auth.check_host_in_room( event.room_id, origin ) if not in_room: raise AuthError(403, "Host not in room.") events = yield self._filter_events_for_server( origin, event.room_id, [event] ) event = events[0] defer.returnValue(event) else: defer.returnValue(None) @log_function def get_min_depth_for_context(self, context): return self.store.get_min_depth(context) @defer.inlineCallbacks @log_function def _handle_new_event(self, origin, event, state=None, auth_events=None, backfilled=False): context = yield self._prep_event( origin, event, state=state, auth_events=auth_events, ) if not event.internal_metadata.is_outlier() and not backfilled: yield self.action_generator.handle_push_actions_for_event( event, context ) event_stream_id, max_stream_id = yield self.store.persist_event( event, context=context, backfilled=backfilled, ) if not backfilled: # this intentionally does not yield: we don't care about the result # and don't need to wait for it. logcontext.preserve_fn(self.pusher_pool.on_new_notifications)( event_stream_id, max_stream_id ) defer.returnValue((context, event_stream_id, max_stream_id)) @defer.inlineCallbacks def _handle_new_events(self, origin, event_infos, backfilled=False): """Creates the appropriate contexts and persists events. The events should not depend on one another, e.g. this should be used to persist a bunch of outliers, but not a chunk of individual events that depend on each other for state calculations. 
""" contexts = yield logcontext.make_deferred_yieldable(defer.gatherResults( [ logcontext.preserve_fn(self._prep_event)( origin, ev_info["event"], state=ev_info.get("state"), auth_events=ev_info.get("auth_events"), ) for ev_info in event_infos ], consumeErrors=True, )) yield self.store.persist_events( [ (ev_info["event"], context) for ev_info, context in itertools.izip(event_infos, contexts) ], backfilled=backfilled, ) @defer.inlineCallbacks def _persist_auth_tree(self, origin, auth_events, state, event): """Checks the auth chain is valid (and passes auth checks) for the state and event. Then persists the auth chain and state atomically. Persists the event seperately. Will attempt to fetch missing auth events. Args: origin (str): Where the events came from auth_events (list) state (list) event (Event) Returns: 2-tuple of (event_stream_id, max_stream_id) from the persist_event call for `event` """ events_to_context = {} for e in itertools.chain(auth_events, state): e.internal_metadata.outlier = True ctx = yield self.state_handler.compute_event_context(e) events_to_context[e.event_id] = ctx event_map = { e.event_id: e for e in itertools.chain(auth_events, state, [event]) } create_event = None for e in auth_events: if (e.type, e.state_key) == (EventTypes.Create, ""): create_event = e break missing_auth_events = set() for e in itertools.chain(auth_events, state, [event]): for e_id, _ in e.auth_events: if e_id not in event_map: missing_auth_events.add(e_id) for e_id in missing_auth_events: m_ev = yield self.replication_layer.get_pdu( [origin], e_id, outlier=True, timeout=10000, ) if m_ev and m_ev.event_id == e_id: event_map[e_id] = m_ev else: logger.info("Failed to find auth event %r", e_id) for e in itertools.chain(auth_events, state, [event]): auth_for_e = { (event_map[e_id].type, event_map[e_id].state_key): event_map[e_id] for e_id, _ in e.auth_events if e_id in event_map } if create_event: auth_for_e[(EventTypes.Create, "")] = create_event try: self.auth.check(e, auth_events=auth_for_e) except SynapseError as err: # we may get SynapseErrors here as well as AuthErrors. For # instance, there are a couple of (ancient) events in some # rooms whose senders do not have the correct sigil; these # cause SynapseErrors in auth.check. We don't want to give up # the attempt to federate altogether in such cases. logger.warn( "Rejecting %s because %s", e.event_id, err.msg ) if e == event: raise events_to_context[e.event_id].rejected = RejectedReason.AUTH_ERROR yield self.store.persist_events( [ (e, events_to_context[e.event_id]) for e in itertools.chain(auth_events, state) ], ) new_event_context = yield self.state_handler.compute_event_context( event, old_state=state ) event_stream_id, max_stream_id = yield self.store.persist_event( event, new_event_context, ) defer.returnValue((event_stream_id, max_stream_id)) @defer.inlineCallbacks def _prep_event(self, origin, event, state=None, auth_events=None): """ Args: origin: event: state: auth_events: Returns: Deferred, which resolves to synapse.events.snapshot.EventContext """ context = yield self.state_handler.compute_event_context( event, old_state=state, ) if not auth_events: auth_events_ids = yield self.auth.compute_auth_events( event, context.prev_state_ids, for_verification=True, ) auth_events = yield self.store.get_events(auth_events_ids) auth_events = { (e.type, e.state_key): e for e in auth_events.values() } # This is a hack to fix some old rooms where the initial join event # didn't reference the create event in its auth events. 
if event.type == EventTypes.Member and not event.auth_events: if len(event.prev_events) == 1 and event.depth < 5: c = yield self.store.get_event( event.prev_events[0][0], allow_none=True, ) if c and c.type == EventTypes.Create: auth_events[(c.type, c.state_key)] = c try: yield self.do_auth( origin, event, context, auth_events=auth_events ) except AuthError as e: logger.warn( "Rejecting %s because %s", event.event_id, e.msg ) context.rejected = RejectedReason.AUTH_ERROR if event.type == EventTypes.GuestAccess and not context.rejected: yield self.maybe_kick_guest_users(event) defer.returnValue(context) @defer.inlineCallbacks def on_query_auth(self, origin, event_id, remote_auth_chain, rejects, missing): # Just go through and process each event in `remote_auth_chain`. We # don't want to fall into the trap of `missing` being wrong. for e in remote_auth_chain: try: yield self._handle_new_event(origin, e) except AuthError: pass # Now get the current auth_chain for the event. event = yield self.store.get_event(event_id) local_auth_chain = yield self.store.get_auth_chain( [auth_id for auth_id, _ in event.auth_events], include_given=True ) # TODO: Check if we would now reject event_id. If so we need to tell # everyone. ret = yield self.construct_auth_difference( local_auth_chain, remote_auth_chain ) for event in ret["auth_chain"]: event.signatures.update( compute_event_signature( event, self.hs.hostname, self.hs.config.signing_key[0] ) ) logger.debug("on_query_auth returning: %s", ret) defer.returnValue(ret) @defer.inlineCallbacks def on_get_missing_events(self, origin, room_id, earliest_events, latest_events, limit, min_depth): in_room = yield self.auth.check_host_in_room( room_id, origin ) if not in_room: raise AuthError(403, "Host not in room.") limit = min(limit, 20) min_depth = max(min_depth, 0) missing_events = yield self.store.get_missing_events( room_id=room_id, earliest_events=earliest_events, latest_events=latest_events, limit=limit, min_depth=min_depth, ) defer.returnValue(missing_events) @defer.inlineCallbacks @log_function def do_auth(self, origin, event, context, auth_events): # Check if we have all the auth events. current_state = set(e.event_id for e in auth_events.values()) event_auth_events = set(e_id for e_id, _ in event.auth_events) if event.is_state(): event_key = (event.type, event.state_key) else: event_key = None if event_auth_events - current_state: have_events = yield self.store.have_events( event_auth_events - current_state ) else: have_events = {} have_events.update({ e.event_id: "" for e in auth_events.values() }) seen_events = set(have_events.keys()) missing_auth = event_auth_events - seen_events - current_state if missing_auth: logger.info("Missing auth: %s", missing_auth) # If we don't have all the auth events, we need to get them. 
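# We ask the origin for the event's full auth chain and walk it, persisting
# anything we haven't seen before as an outlier so that the conflict
# resolution below has the events it needs.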
try: remote_auth_chain = yield self.replication_layer.get_event_auth( origin, event.room_id, event.event_id ) seen_remotes = yield self.store.have_events( [e.event_id for e in remote_auth_chain] ) for e in remote_auth_chain: if e.event_id in seen_remotes.keys(): continue if e.event_id == event.event_id: continue try: auth_ids = [e_id for e_id, _ in e.auth_events] auth = { (e.type, e.state_key): e for e in remote_auth_chain if e.event_id in auth_ids or e.type == EventTypes.Create } e.internal_metadata.outlier = True logger.debug( "do_auth %s missing_auth: %s", event.event_id, e.event_id ) yield self._handle_new_event( origin, e, auth_events=auth ) if e.event_id in event_auth_events: auth_events[(e.type, e.state_key)] = e except AuthError: pass have_events = yield self.store.have_events( [e_id for e_id, _ in event.auth_events] ) seen_events = set(have_events.keys()) except: # FIXME: logger.exception("Failed to get auth chain") # FIXME: Assumes we have and stored all the state for all the # prev_events current_state = set(e.event_id for e in auth_events.values()) different_auth = event_auth_events - current_state if different_auth and not event.internal_metadata.is_outlier(): # Do auth conflict res. logger.info("Different auth: %s", different_auth) different_events = yield logcontext.make_deferred_yieldable( defer.gatherResults([ logcontext.preserve_fn(self.store.get_event)( d, allow_none=True, allow_rejected=False, ) for d in different_auth if d in have_events and not have_events[d] ], consumeErrors=True) ).addErrback(unwrapFirstError) if different_events: local_view = dict(auth_events) remote_view = dict(auth_events) remote_view.update({ (d.type, d.state_key): d for d in different_events if d }) new_state = self.state_handler.resolve_events( [local_view.values(), remote_view.values()], event ) auth_events.update(new_state) current_state = set(e.event_id for e in auth_events.values()) different_auth = event_auth_events - current_state context.current_state_ids = dict(context.current_state_ids) context.current_state_ids.update({ k: a.event_id for k, a in auth_events.items() if k != event_key }) context.prev_state_ids = dict(context.prev_state_ids) context.prev_state_ids.update({ k: a.event_id for k, a in auth_events.items() }) context.state_group = self.store.get_next_state_group() if different_auth and not event.internal_metadata.is_outlier(): logger.info("Different auth after resolution: %s", different_auth) # Only do auth resolution if we have something new to say. # We can't rove an auth failure. do_resolution = False provable = [ RejectedReason.NOT_ANCESTOR, RejectedReason.NOT_ANCESTOR, ] for e_id in different_auth: if e_id in have_events: if have_events[e_id] in provable: do_resolution = True break if do_resolution: # 1. Get what we think is the auth chain. auth_ids = yield self.auth.compute_auth_events( event, context.prev_state_ids ) local_auth_chain = yield self.store.get_auth_chain( auth_ids, include_given=True ) try: # 2. Get remote difference. result = yield self.replication_layer.query_auth( origin, event.room_id, event.event_id, local_auth_chain, ) seen_remotes = yield self.store.have_events( [e.event_id for e in result["auth_chain"]] ) # 3. Process any remote auth chain events we haven't seen. 
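# (As with the missing_auth handling above, each previously-unseen event is
# marked as an outlier and run through _handle_new_event with whatever auth
# events we can reconstruct for it.)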
for ev in result["auth_chain"]: if ev.event_id in seen_remotes.keys(): continue if ev.event_id == event.event_id: continue try: auth_ids = [e_id for e_id, _ in ev.auth_events] auth = { (e.type, e.state_key): e for e in result["auth_chain"] if e.event_id in auth_ids or event.type == EventTypes.Create } ev.internal_metadata.outlier = True logger.debug( "do_auth %s different_auth: %s", event.event_id, e.event_id ) yield self._handle_new_event( origin, ev, auth_events=auth ) if ev.event_id in event_auth_events: auth_events[(ev.type, ev.state_key)] = ev except AuthError: pass except: # FIXME: logger.exception("Failed to query auth chain") # 4. Look at rejects and their proofs. # TODO. context.current_state_ids = dict(context.current_state_ids) context.current_state_ids.update({ k: a.event_id for k, a in auth_events.items() if k != event_key }) context.prev_state_ids = dict(context.prev_state_ids) context.prev_state_ids.update({ k: a.event_id for k, a in auth_events.items() }) context.state_group = self.store.get_next_state_group() try: self.auth.check(event, auth_events=auth_events) except AuthError as e: logger.warn("Failed auth resolution for %r because %s", event, e) raise e @defer.inlineCallbacks def construct_auth_difference(self, local_auth, remote_auth): """ Given a local and remote auth chain, find the differences. This assumes that we have already processed all events in remote_auth Params: local_auth (list) remote_auth (list) Returns: dict """ logger.debug("construct_auth_difference Start!") # TODO: Make sure we are OK with local_auth or remote_auth having more # auth events in them than strictly necessary. def sort_fun(ev): return ev.depth, ev.event_id logger.debug("construct_auth_difference after sort_fun!") # We find the differences by starting at the "bottom" of each list # and iterating up on both lists. The lists are ordered by depth and # then event_id, we iterate up both lists until we find the event ids # don't match. Then we look at depth/event_id to see which side is # missing that event, and iterate only up that list. Repeat. 
remote_list = list(remote_auth) remote_list.sort(key=sort_fun) local_list = list(local_auth) local_list.sort(key=sort_fun) local_iter = iter(local_list) remote_iter = iter(remote_list) logger.debug("construct_auth_difference before get_next!") def get_next(it, opt=None): try: return it.next() except: return opt current_local = get_next(local_iter) current_remote = get_next(remote_iter) logger.debug("construct_auth_difference before while") missing_remotes = [] missing_locals = [] while current_local or current_remote: if current_remote is None: missing_locals.append(current_local) current_local = get_next(local_iter) continue if current_local is None: missing_remotes.append(current_remote) current_remote = get_next(remote_iter) continue if current_local.event_id == current_remote.event_id: current_local = get_next(local_iter) current_remote = get_next(remote_iter) continue if current_local.depth < current_remote.depth: missing_locals.append(current_local) current_local = get_next(local_iter) continue if current_local.depth > current_remote.depth: missing_remotes.append(current_remote) current_remote = get_next(remote_iter) continue # They have the same depth, so we fall back to the event_id order if current_local.event_id < current_remote.event_id: missing_locals.append(current_local) current_local = get_next(local_iter) if current_local.event_id > current_remote.event_id: missing_remotes.append(current_remote) current_remote = get_next(remote_iter) continue logger.debug("construct_auth_difference after while") # missing locals should be sent to the server # We should find why we are missing remotes, as they will have been # rejected. # Remove events from missing_remotes if they are referencing a missing # remote. We only care about the "root" rejected ones. missing_remote_ids = [e.event_id for e in missing_remotes] base_remote_rejected = list(missing_remotes) for e in missing_remotes: for e_id, _ in e.auth_events: if e_id in missing_remote_ids: try: base_remote_rejected.remove(e) except ValueError: pass reason_map = {} for e in base_remote_rejected: reason = yield self.store.get_rejection_reason(e.event_id) if reason is None: # TODO: e is not in the current state, so we should # construct some proof of that. continue reason_map[e.event_id] = reason if reason == RejectedReason.AUTH_ERROR: pass elif reason == RejectedReason.REPLACED: # TODO: Get proof pass elif reason == RejectedReason.NOT_ANCESTOR: # TODO: Get proof. 
pass logger.debug("construct_auth_difference returning") defer.returnValue({ "auth_chain": local_auth, "rejects": { e.event_id: { "reason": reason_map[e.event_id], "proof": None, } for e in base_remote_rejected }, "missing": [e.event_id for e in missing_locals], }) @defer.inlineCallbacks @log_function def exchange_third_party_invite( self, sender_user_id, target_user_id, room_id, signed, ): third_party_invite = { "signed": signed, } event_dict = { "type": EventTypes.Member, "content": { "membership": Membership.INVITE, "third_party_invite": third_party_invite, }, "room_id": room_id, "sender": sender_user_id, "state_key": target_user_id, } if (yield self.auth.check_host_in_room(room_id, self.hs.hostname)): builder = self.event_builder_factory.new(event_dict) EventValidator().validate_new(builder) message_handler = self.hs.get_handlers().message_handler event, context = yield message_handler._create_new_client_event( builder=builder ) event, context = yield self.add_display_name_to_third_party_invite( event_dict, event, context ) try: yield self.auth.check_from_context(event, context) except AuthError as e: logger.warn("Denying new third party invite %r because %s", event, e) raise e yield self._check_signature(event, context) member_handler = self.hs.get_handlers().room_member_handler yield member_handler.send_membership_event(None, event, context) else: destinations = set(x.split(":", 1)[-1] for x in (sender_user_id, room_id)) yield self.replication_layer.forward_third_party_invite( destinations, room_id, event_dict, ) @defer.inlineCallbacks @log_function def on_exchange_third_party_invite_request(self, origin, room_id, event_dict): """Handle an exchange_third_party_invite request from a remote server The remote server will call this when it wants to turn a 3pid invite into a normal m.room.member invite. Returns: Deferred: resolves (to None) """ builder = self.event_builder_factory.new(event_dict) message_handler = self.hs.get_handlers().message_handler event, context = yield message_handler._create_new_client_event( builder=builder, ) event, context = yield self.add_display_name_to_third_party_invite( event_dict, event, context ) try: self.auth.check_from_context(event, context) except AuthError as e: logger.warn("Denying third party invite %r because %s", event, e) raise e yield self._check_signature(event, context) # XXX we send the invite here, but send_membership_event also sends it, # so we end up making two requests. I think this is redundant. returned_invite = yield self.send_invite(origin, event) # TODO: Make sure the signatures actually are correct. event.signatures.update(returned_invite.signatures) member_handler = self.hs.get_handlers().room_member_handler yield member_handler.send_membership_event(None, event, context) @defer.inlineCallbacks def add_display_name_to_third_party_invite(self, event_dict, event, context): key = ( EventTypes.ThirdPartyInvite, event.content["third_party_invite"]["signed"]["token"] ) original_invite = None original_invite_id = context.prev_state_ids.get(key) if original_invite_id: original_invite = yield self.store.get_event( original_invite_id, allow_none=True ) if original_invite: display_name = original_invite.content["display_name"] event_dict["content"]["third_party_invite"]["display_name"] = display_name else: logger.info( "Could not find invite event for third_party_invite: %r", event_dict ) # We don't discard here as this is not the appropriate place to do # auth checks. 
If we need the invite and don't have it then the # auth check code will explode appropriately. builder = self.event_builder_factory.new(event_dict) EventValidator().validate_new(builder) message_handler = self.hs.get_handlers().message_handler event, context = yield message_handler._create_new_client_event(builder=builder) defer.returnValue((event, context)) @defer.inlineCallbacks def _check_signature(self, event, context): """ Checks that the signature in the event is consistent with its invite. Args: event (Event): The m.room.member event to check context (EventContext): Raises: AuthError: if signature didn't match any keys, or key has been revoked, SynapseError: if a transient error meant a key couldn't be checked for revocation. """ signed = event.content["third_party_invite"]["signed"] token = signed["token"] invite_event_id = context.prev_state_ids.get( (EventTypes.ThirdPartyInvite, token,) ) invite_event = None if invite_event_id: invite_event = yield self.store.get_event(invite_event_id, allow_none=True) if not invite_event: raise AuthError(403, "Could not find invite") last_exception = None for public_key_object in self.hs.get_auth().get_public_keys(invite_event): try: for server, signature_block in signed["signatures"].items(): for key_name, encoded_signature in signature_block.items(): if not key_name.startswith("ed25519:"): continue public_key = public_key_object["public_key"] verify_key = decode_verify_key_bytes( key_name, decode_base64(public_key) ) verify_signed_json(signed, server, verify_key) if "key_validity_url" in public_key_object: yield self._check_key_revocation( public_key, public_key_object["key_validity_url"] ) return except Exception as e: last_exception = e raise last_exception @defer.inlineCallbacks def _check_key_revocation(self, public_key, url): """ Checks whether public_key has been revoked. Args: public_key (str): base-64 encoded public key. url (str): Key revocation URL. Raises: AuthError: if they key has been revoked. SynapseError: if a transient error meant a key couldn't be checked for revocation. """ try: response = yield self.hs.get_simple_http_client().get_json( url, {"public_key": public_key} ) except Exception: raise SynapseError( 502, "Third party certificate could not be checked" ) if "valid" not in response or not response["valid"]: raise AuthError(403, "Third party certificate was invalid") synapse-0.24.0/synapse/handlers/groups_local.py000066400000000000000000000361371317335640100216210ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2017 Vector Creations Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
from twisted.internet import defer from synapse.api.errors import SynapseError from synapse.types import get_domain_from_id import logging logger = logging.getLogger(__name__) def _create_rerouter(func_name): """Returns a function that looks at the group id and calls the function on federation or the local group server if the group is local """ def f(self, group_id, *args, **kwargs): if self.is_mine_id(group_id): return getattr(self.groups_server_handler, func_name)( group_id, *args, **kwargs ) else: destination = get_domain_from_id(group_id) return getattr(self.transport_client, func_name)( destination, group_id, *args, **kwargs ) return f class GroupsLocalHandler(object): def __init__(self, hs): self.hs = hs self.store = hs.get_datastore() self.room_list_handler = hs.get_room_list_handler() self.groups_server_handler = hs.get_groups_server_handler() self.transport_client = hs.get_federation_transport_client() self.auth = hs.get_auth() self.clock = hs.get_clock() self.keyring = hs.get_keyring() self.is_mine_id = hs.is_mine_id self.signing_key = hs.config.signing_key[0] self.server_name = hs.hostname self.notifier = hs.get_notifier() self.attestations = hs.get_groups_attestation_signing() self.profile_handler = hs.get_profile_handler() # Ensure attestations get renewed hs.get_groups_attestation_renewer() # The following functions merely route the query to the local groups server # or federation depending on if the group is local or remote get_group_profile = _create_rerouter("get_group_profile") update_group_profile = _create_rerouter("update_group_profile") get_rooms_in_group = _create_rerouter("get_rooms_in_group") get_invited_users_in_group = _create_rerouter("get_invited_users_in_group") add_room_to_group = _create_rerouter("add_room_to_group") remove_room_from_group = _create_rerouter("remove_room_from_group") update_group_summary_room = _create_rerouter("update_group_summary_room") delete_group_summary_room = _create_rerouter("delete_group_summary_room") update_group_category = _create_rerouter("update_group_category") delete_group_category = _create_rerouter("delete_group_category") get_group_category = _create_rerouter("get_group_category") get_group_categories = _create_rerouter("get_group_categories") update_group_summary_user = _create_rerouter("update_group_summary_user") delete_group_summary_user = _create_rerouter("delete_group_summary_user") update_group_role = _create_rerouter("update_group_role") delete_group_role = _create_rerouter("delete_group_role") get_group_role = _create_rerouter("get_group_role") get_group_roles = _create_rerouter("get_group_roles") @defer.inlineCallbacks def get_group_summary(self, group_id, requester_user_id): """Get the group summary for a group. If the group is remote we check that the users have valid attestations. """ if self.is_mine_id(group_id): res = yield self.groups_server_handler.get_group_summary( group_id, requester_user_id ) else: res = yield self.transport_client.get_group_summary( get_domain_from_id(group_id), group_id, requester_user_id, ) group_server_name = get_domain_from_id(group_id) # Loop through the users and validate the attestations. 
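# An attestation is (roughly) a signed assertion from a user's own homeserver
# that the user really is a member of the group; summary entries for remote
# users whose attestation fails to verify are dropped below.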
chunk = res["users_section"]["users"] valid_users = [] for entry in chunk: g_user_id = entry["user_id"] attestation = entry.pop("attestation", {}) try: if get_domain_from_id(g_user_id) != group_server_name: yield self.attestations.verify_attestation( attestation, group_id=group_id, user_id=g_user_id, server_name=get_domain_from_id(g_user_id), ) valid_users.append(entry) except Exception as e: logger.info("Failed to verify user is in group: %s", e) res["users_section"]["users"] = valid_users res["users_section"]["users"].sort(key=lambda e: e.get("order", 0)) res["rooms_section"]["rooms"].sort(key=lambda e: e.get("order", 0)) # Add `is_publicised` flag to indicate whether the user has publicised their # membership of the group on their profile result = yield self.store.get_publicised_groups_for_user(requester_user_id) is_publicised = group_id in result res.setdefault("user", {})["is_publicised"] = is_publicised defer.returnValue(res) @defer.inlineCallbacks def create_group(self, group_id, user_id, content): """Create a group """ logger.info("Asking to create group with ID: %r", group_id) if self.is_mine_id(group_id): res = yield self.groups_server_handler.create_group( group_id, user_id, content ) local_attestation = None remote_attestation = None else: local_attestation = self.attestations.create_attestation(group_id, user_id) content["attestation"] = local_attestation content["user_profile"] = yield self.profile_handler.get_profile(user_id) res = yield self.transport_client.create_group( get_domain_from_id(group_id), group_id, user_id, content, ) remote_attestation = res["attestation"] yield self.attestations.verify_attestation( remote_attestation, group_id=group_id, user_id=user_id, server_name=get_domain_from_id(group_id), ) is_publicised = content.get("publicise", False) token = yield self.store.register_user_group_membership( group_id, user_id, membership="join", is_admin=True, local_attestation=local_attestation, remote_attestation=remote_attestation, is_publicised=is_publicised, ) self.notifier.on_new_event( "groups_key", token, users=[user_id], ) defer.returnValue(res) @defer.inlineCallbacks def get_users_in_group(self, group_id, requester_user_id): """Get users in a group """ if self.is_mine_id(group_id): res = yield self.groups_server_handler.get_users_in_group( group_id, requester_user_id ) defer.returnValue(res) group_server_name = get_domain_from_id(group_id) res = yield self.transport_client.get_users_in_group( get_domain_from_id(group_id), group_id, requester_user_id, ) chunk = res["chunk"] valid_entries = [] for entry in chunk: g_user_id = entry["user_id"] attestation = entry.pop("attestation", {}) try: if get_domain_from_id(g_user_id) != group_server_name: yield self.attestations.verify_attestation( attestation, group_id=group_id, user_id=g_user_id, server_name=get_domain_from_id(g_user_id), ) valid_entries.append(entry) except Exception as e: logger.info("Failed to verify user is in group: %s", e) res["chunk"] = valid_entries defer.returnValue(res) @defer.inlineCallbacks def join_group(self, group_id, user_id, content): """Request to join a group """ raise NotImplementedError() # TODO @defer.inlineCallbacks def accept_invite(self, group_id, user_id, content): """Accept an invite to a group """ if self.is_mine_id(group_id): yield self.groups_server_handler.accept_invite( group_id, user_id, content ) local_attestation = None remote_attestation = None else: local_attestation = self.attestations.create_attestation(group_id, user_id) content["attestation"] = local_attestation 
res = yield self.transport_client.accept_group_invite( get_domain_from_id(group_id), group_id, user_id, content, ) remote_attestation = res["attestation"] yield self.attestations.verify_attestation( remote_attestation, group_id=group_id, user_id=user_id, server_name=get_domain_from_id(group_id), ) # TODO: Check that the group is public and we're being added publically is_publicised = content.get("publicise", False) token = yield self.store.register_user_group_membership( group_id, user_id, membership="join", is_admin=False, local_attestation=local_attestation, remote_attestation=remote_attestation, is_publicised=is_publicised, ) self.notifier.on_new_event( "groups_key", token, users=[user_id], ) defer.returnValue({}) @defer.inlineCallbacks def invite(self, group_id, user_id, requester_user_id, config): """Invite a user to a group """ content = { "requester_user_id": requester_user_id, "config": config, } if self.is_mine_id(group_id): res = yield self.groups_server_handler.invite_to_group( group_id, user_id, requester_user_id, content, ) else: res = yield self.transport_client.invite_to_group( get_domain_from_id(group_id), group_id, user_id, requester_user_id, content, ) defer.returnValue(res) @defer.inlineCallbacks def on_invite(self, group_id, user_id, content): """One of our users were invited to a group """ # TODO: Support auto join and rejection if not self.is_mine_id(user_id): raise SynapseError(400, "User not on this server") local_profile = {} if "profile" in content: if "name" in content["profile"]: local_profile["name"] = content["profile"]["name"] if "avatar_url" in content["profile"]: local_profile["avatar_url"] = content["profile"]["avatar_url"] token = yield self.store.register_user_group_membership( group_id, user_id, membership="invite", content={"profile": local_profile, "inviter": content["inviter"]}, ) self.notifier.on_new_event( "groups_key", token, users=[user_id], ) try: user_profile = yield self.profile_handler.get_profile(user_id) except Exception as e: logger.warn("No profile for user %s: %s", user_id, e) user_profile = {} defer.returnValue({"state": "invite", "user_profile": user_profile}) @defer.inlineCallbacks def remove_user_from_group(self, group_id, user_id, requester_user_id, content): """Remove a user from a group """ if user_id == requester_user_id: token = yield self.store.register_user_group_membership( group_id, user_id, membership="leave", ) self.notifier.on_new_event( "groups_key", token, users=[user_id], ) # TODO: Should probably remember that we tried to leave so that we can # retry if the group server is currently down. 
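# In either case (a voluntary leave or a removal by someone else) we still
# have to tell the group server itself: directly if we host the group, and
# over federation otherwise.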
if self.is_mine_id(group_id): res = yield self.groups_server_handler.remove_user_from_group( group_id, user_id, requester_user_id, content, ) else: content["requester_user_id"] = requester_user_id res = yield self.transport_client.remove_user_from_group( get_domain_from_id(group_id), group_id, requester_user_id, user_id, content, ) defer.returnValue(res) @defer.inlineCallbacks def user_removed_from_group(self, group_id, user_id, content): """One of our users was removed/kicked from a group """ # TODO: Check if user in group token = yield self.store.register_user_group_membership( group_id, user_id, membership="leave", ) self.notifier.on_new_event( "groups_key", token, users=[user_id], ) @defer.inlineCallbacks def get_joined_groups(self, user_id): group_ids = yield self.store.get_joined_groups(user_id) defer.returnValue({"groups": group_ids}) @defer.inlineCallbacks def get_publicised_groups_for_user(self, user_id): if self.hs.is_mine_id(user_id): result = yield self.store.get_publicised_groups_for_user(user_id) defer.returnValue({"groups": result}) else: result = yield self.transport_client.get_publicised_groups_for_user( get_domain_from_id(user_id), user_id ) # TODO: Verify attestations defer.returnValue(result) @defer.inlineCallbacks def bulk_get_publicised_groups(self, user_ids, proxy=True): destinations = {} local_users = set() for user_id in user_ids: if self.hs.is_mine_id(user_id): local_users.add(user_id) else: destinations.setdefault( get_domain_from_id(user_id), set() ).add(user_id) if not proxy and destinations: raise SynapseError(400, "Some user_ids are not local") results = {} failed_results = [] for destination, dest_user_ids in destinations.iteritems(): try: r = yield self.transport_client.bulk_get_publicised_groups( destination, list(dest_user_ids), ) results.update(r["users"]) except Exception: failed_results.extend(dest_user_ids) for uid in local_users: results[uid] = yield self.store.get_publicised_groups_for_user( uid ) defer.returnValue({"users": results}) synapse-0.24.0/synapse/handlers/identity.py000066400000000000000000000157231317335640100207570ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # Copyright 2017 Vector Creations Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
"""Utilities for interacting with Identity Servers""" from twisted.internet import defer from synapse.api.errors import ( MatrixCodeMessageException, CodeMessageException ) from ._base import BaseHandler from synapse.util.async import run_on_reactor from synapse.api.errors import SynapseError, Codes import json import logging logger = logging.getLogger(__name__) class IdentityHandler(BaseHandler): def __init__(self, hs): super(IdentityHandler, self).__init__(hs) self.http_client = hs.get_simple_http_client() self.trusted_id_servers = set(hs.config.trusted_third_party_id_servers) self.trust_any_id_server_just_for_testing_do_not_use = ( hs.config.use_insecure_ssl_client_just_for_testing_do_not_use ) def _should_trust_id_server(self, id_server): if id_server not in self.trusted_id_servers: if self.trust_any_id_server_just_for_testing_do_not_use: logger.warn( "Trusting untrustworthy ID server %r even though it isn't" " in the trusted id list for testing because" " 'use_insecure_ssl_client_just_for_testing_do_not_use'" " is set in the config", id_server, ) else: return False return True @defer.inlineCallbacks def threepid_from_creds(self, creds): yield run_on_reactor() if 'id_server' in creds: id_server = creds['id_server'] elif 'idServer' in creds: id_server = creds['idServer'] else: raise SynapseError(400, "No id_server in creds") if 'client_secret' in creds: client_secret = creds['client_secret'] elif 'clientSecret' in creds: client_secret = creds['clientSecret'] else: raise SynapseError(400, "No client_secret in creds") if not self._should_trust_id_server(id_server): logger.warn( '%s is not a trusted ID server: rejecting 3pid ' + 'credentials', id_server ) defer.returnValue(None) data = {} try: data = yield self.http_client.get_json( "https://%s%s" % ( id_server, "/_matrix/identity/api/v1/3pid/getValidated3pid" ), {'sid': creds['sid'], 'client_secret': client_secret} ) except MatrixCodeMessageException as e: logger.info("getValidated3pid failed with Matrix error: %r", e) raise SynapseError(e.code, e.msg, e.errcode) except CodeMessageException as e: data = json.loads(e.msg) if 'medium' in data: defer.returnValue(data) defer.returnValue(None) @defer.inlineCallbacks def bind_threepid(self, creds, mxid): yield run_on_reactor() logger.debug("binding threepid %r to %s", creds, mxid) data = None if 'id_server' in creds: id_server = creds['id_server'] elif 'idServer' in creds: id_server = creds['idServer'] else: raise SynapseError(400, "No id_server in creds") if 'client_secret' in creds: client_secret = creds['client_secret'] elif 'clientSecret' in creds: client_secret = creds['clientSecret'] else: raise SynapseError(400, "No client_secret in creds") try: data = yield self.http_client.post_urlencoded_get_json( "https://%s%s" % ( id_server, "/_matrix/identity/api/v1/3pid/bind" ), { 'sid': creds['sid'], 'client_secret': client_secret, 'mxid': mxid, } ) logger.debug("bound threepid %r to %s", creds, mxid) except CodeMessageException as e: data = json.loads(e.msg) defer.returnValue(data) @defer.inlineCallbacks def requestEmailToken(self, id_server, email, client_secret, send_attempt, **kwargs): yield run_on_reactor() if not self._should_trust_id_server(id_server): raise SynapseError( 400, "Untrusted ID server '%s'" % id_server, Codes.SERVER_NOT_TRUSTED ) params = { 'email': email, 'client_secret': client_secret, 'send_attempt': send_attempt, } params.update(kwargs) try: data = yield self.http_client.post_json_get_json( "https://%s%s" % ( id_server, 
"/_matrix/identity/api/v1/validate/email/requestToken" ), params ) defer.returnValue(data) except MatrixCodeMessageException as e: logger.info("Proxied requestToken failed with Matrix error: %r", e) raise SynapseError(e.code, e.msg, e.errcode) except CodeMessageException as e: logger.info("Proxied requestToken failed: %r", e) raise e @defer.inlineCallbacks def requestMsisdnToken( self, id_server, country, phone_number, client_secret, send_attempt, **kwargs ): yield run_on_reactor() if not self._should_trust_id_server(id_server): raise SynapseError( 400, "Untrusted ID server '%s'" % id_server, Codes.SERVER_NOT_TRUSTED ) params = { 'country': country, 'phone_number': phone_number, 'client_secret': client_secret, 'send_attempt': send_attempt, } params.update(kwargs) try: data = yield self.http_client.post_json_get_json( "https://%s%s" % ( id_server, "/_matrix/identity/api/v1/validate/msisdn/requestToken" ), params ) defer.returnValue(data) except MatrixCodeMessageException as e: logger.info("Proxied requestToken failed with Matrix error: %r", e) raise SynapseError(e.code, e.msg, e.errcode) except CodeMessageException as e: logger.info("Proxied requestToken failed: %r", e) raise e synapse-0.24.0/synapse/handlers/initial_sync.py000066400000000000000000000400751317335640100216110ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from twisted.internet import defer from synapse.api.constants import EventTypes, Membership from synapse.api.errors import AuthError, Codes from synapse.events.utils import serialize_event from synapse.events.validator import EventValidator from synapse.handlers.presence import format_user_presence_state from synapse.streams.config import PaginationConfig from synapse.types import ( UserID, StreamToken, ) from synapse.util import unwrapFirstError from synapse.util.async import concurrently_execute from synapse.util.caches.snapshot_cache import SnapshotCache from synapse.util.logcontext import preserve_fn, preserve_context_over_deferred from synapse.visibility import filter_events_for_client from ._base import BaseHandler import logging logger = logging.getLogger(__name__) class InitialSyncHandler(BaseHandler): def __init__(self, hs): super(InitialSyncHandler, self).__init__(hs) self.hs = hs self.state = hs.get_state_handler() self.clock = hs.get_clock() self.validator = EventValidator() self.snapshot_cache = SnapshotCache() def snapshot_all_rooms(self, user_id=None, pagin_config=None, as_client_event=True, include_archived=False): """Retrieve a snapshot of all rooms the user is invited or has joined. This snapshot may include messages for all rooms where the user is joined, depending on the pagination config. Args: user_id (str): The ID of the user making the request. pagin_config (synapse.api.streams.PaginationConfig): The pagination config used to determine how many messages *PER ROOM* to return. as_client_event (bool): True to get events in client-server format. 
include_archived (bool): True to get rooms that the user has left Returns: A list of dicts with "room_id" and "membership" keys for all rooms the user is currently invited or joined in on. Rooms where the user is joined on, may return a "messages" key with messages, depending on the specified PaginationConfig. """ key = ( user_id, pagin_config.from_token, pagin_config.to_token, pagin_config.direction, pagin_config.limit, as_client_event, include_archived, ) now_ms = self.clock.time_msec() result = self.snapshot_cache.get(now_ms, key) if result is not None: return result return self.snapshot_cache.set(now_ms, key, self._snapshot_all_rooms( user_id, pagin_config, as_client_event, include_archived )) @defer.inlineCallbacks def _snapshot_all_rooms(self, user_id=None, pagin_config=None, as_client_event=True, include_archived=False): memberships = [Membership.INVITE, Membership.JOIN] if include_archived: memberships.append(Membership.LEAVE) room_list = yield self.store.get_rooms_for_user_where_membership_is( user_id=user_id, membership_list=memberships ) user = UserID.from_string(user_id) rooms_ret = [] now_token = yield self.hs.get_event_sources().get_current_token() presence_stream = self.hs.get_event_sources().sources["presence"] pagination_config = PaginationConfig(from_token=now_token) presence, _ = yield presence_stream.get_pagination_rows( user, pagination_config.get_source_config("presence"), None ) receipt_stream = self.hs.get_event_sources().sources["receipt"] receipt, _ = yield receipt_stream.get_pagination_rows( user, pagination_config.get_source_config("receipt"), None ) tags_by_room = yield self.store.get_tags_for_user(user_id) account_data, account_data_by_room = ( yield self.store.get_account_data_for_user(user_id) ) public_room_ids = yield self.store.get_public_room_ids() limit = pagin_config.limit if limit is None: limit = 10 @defer.inlineCallbacks def handle_room(event): d = { "room_id": event.room_id, "membership": event.membership, "visibility": ( "public" if event.room_id in public_room_ids else "private" ), } if event.membership == Membership.INVITE: time_now = self.clock.time_msec() d["inviter"] = event.sender invite_event = yield self.store.get_event(event.event_id) d["invite"] = serialize_event(invite_event, time_now, as_client_event) rooms_ret.append(d) if event.membership not in (Membership.JOIN, Membership.LEAVE): return try: if event.membership == Membership.JOIN: room_end_token = now_token.room_key deferred_room_state = self.state_handler.get_current_state( event.room_id ) elif event.membership == Membership.LEAVE: room_end_token = "s%d" % (event.stream_ordering,) deferred_room_state = self.store.get_state_for_events( [event.event_id], None ) deferred_room_state.addCallback( lambda states: states[event.event_id] ) (messages, token), current_state = yield preserve_context_over_deferred( defer.gatherResults( [ preserve_fn(self.store.get_recent_events_for_room)( event.room_id, limit=limit, end_token=room_end_token, ), deferred_room_state, ] ) ).addErrback(unwrapFirstError) messages = yield filter_events_for_client( self.store, user_id, messages ) start_token = now_token.copy_and_replace("room_key", token[0]) end_token = now_token.copy_and_replace("room_key", token[1]) time_now = self.clock.time_msec() d["messages"] = { "chunk": [ serialize_event(m, time_now, as_client_event) for m in messages ], "start": start_token.to_string(), "end": end_token.to_string(), } d["state"] = [ serialize_event(c, time_now, as_client_event) for c in current_state.values() ] 
account_data_events = [] tags = tags_by_room.get(event.room_id) if tags: account_data_events.append({ "type": "m.tag", "content": {"tags": tags}, }) account_data = account_data_by_room.get(event.room_id, {}) for account_data_type, content in account_data.items(): account_data_events.append({ "type": account_data_type, "content": content, }) d["account_data"] = account_data_events except: logger.exception("Failed to get snapshot") yield concurrently_execute(handle_room, room_list, 10) account_data_events = [] for account_data_type, content in account_data.items(): account_data_events.append({ "type": account_data_type, "content": content, }) now = self.clock.time_msec() ret = { "rooms": rooms_ret, "presence": [ { "type": "m.presence", "content": format_user_presence_state(event, now), } for event in presence ], "account_data": account_data_events, "receipts": receipt, "end": now_token.to_string(), } defer.returnValue(ret) @defer.inlineCallbacks def room_initial_sync(self, requester, room_id, pagin_config=None): """Capture the a snapshot of a room. If user is currently a member of the room this will be what is currently in the room. If the user left the room this will be what was in the room when they left. Args: requester(Requester): The user to get a snapshot for. room_id(str): The room to get a snapshot of. pagin_config(synapse.streams.config.PaginationConfig): The pagination config used to determine how many messages to return. Raises: AuthError if the user wasn't in the room. Returns: A JSON serialisable dict with the snapshot of the room. """ user_id = requester.user.to_string() membership, member_event_id = yield self._check_in_room_or_world_readable( room_id, user_id, ) is_peeking = member_event_id is None if membership == Membership.JOIN: result = yield self._room_initial_sync_joined( user_id, room_id, pagin_config, membership, is_peeking ) elif membership == Membership.LEAVE: result = yield self._room_initial_sync_parted( user_id, room_id, pagin_config, membership, member_event_id, is_peeking ) account_data_events = [] tags = yield self.store.get_tags_for_room(user_id, room_id) if tags: account_data_events.append({ "type": "m.tag", "content": {"tags": tags}, }) account_data = yield self.store.get_account_data_for_room(user_id, room_id) for account_data_type, content in account_data.items(): account_data_events.append({ "type": account_data_type, "content": content, }) result["account_data"] = account_data_events defer.returnValue(result) @defer.inlineCallbacks def _room_initial_sync_parted(self, user_id, room_id, pagin_config, membership, member_event_id, is_peeking): room_state = yield self.store.get_state_for_events( [member_event_id], None ) room_state = room_state[member_event_id] limit = pagin_config.limit if pagin_config else None if limit is None: limit = 10 stream_token = yield self.store.get_stream_token_for_event( member_event_id ) messages, token = yield self.store.get_recent_events_for_room( room_id, limit=limit, end_token=stream_token ) messages = yield filter_events_for_client( self.store, user_id, messages, is_peeking=is_peeking ) start_token = StreamToken.START.copy_and_replace("room_key", token[0]) end_token = StreamToken.START.copy_and_replace("room_key", token[1]) time_now = self.clock.time_msec() defer.returnValue({ "membership": membership, "room_id": room_id, "messages": { "chunk": [serialize_event(m, time_now) for m in messages], "start": start_token.to_string(), "end": end_token.to_string(), }, "state": [serialize_event(s, time_now) for s in 
room_state.values()], "presence": [], "receipts": [], }) @defer.inlineCallbacks def _room_initial_sync_joined(self, user_id, room_id, pagin_config, membership, is_peeking): current_state = yield self.state.get_current_state( room_id=room_id, ) # TODO: These concurrently time_now = self.clock.time_msec() state = [ serialize_event(x, time_now) for x in current_state.values() ] now_token = yield self.hs.get_event_sources().get_current_token() limit = pagin_config.limit if pagin_config else None if limit is None: limit = 10 room_members = [ m for m in current_state.values() if m.type == EventTypes.Member and m.content["membership"] == Membership.JOIN ] presence_handler = self.hs.get_presence_handler() @defer.inlineCallbacks def get_presence(): states = yield presence_handler.get_states( [m.user_id for m in room_members], as_event=True, ) defer.returnValue(states) @defer.inlineCallbacks def get_receipts(): receipts = yield self.store.get_linearized_receipts_for_room( room_id, to_key=now_token.receipt_key, ) if not receipts: receipts = [] defer.returnValue(receipts) presence, receipts, (messages, token) = yield defer.gatherResults( [ preserve_fn(get_presence)(), preserve_fn(get_receipts)(), preserve_fn(self.store.get_recent_events_for_room)( room_id, limit=limit, end_token=now_token.room_key, ) ], consumeErrors=True, ).addErrback(unwrapFirstError) messages = yield filter_events_for_client( self.store, user_id, messages, is_peeking=is_peeking, ) start_token = now_token.copy_and_replace("room_key", token[0]) end_token = now_token.copy_and_replace("room_key", token[1]) time_now = self.clock.time_msec() ret = { "room_id": room_id, "messages": { "chunk": [serialize_event(m, time_now) for m in messages], "start": start_token.to_string(), "end": end_token.to_string(), }, "state": state, "presence": presence, "receipts": receipts, } if not is_peeking: ret["membership"] = membership defer.returnValue(ret) @defer.inlineCallbacks def _check_in_room_or_world_readable(self, room_id, user_id): try: # check_user_was_in_room will return the most recent membership # event for the user if: # * The user is a non-guest user, and was ever in the room # * The user is a guest user, and has joined the room # else it will throw. member_event = yield self.auth.check_user_was_in_room(room_id, user_id) defer.returnValue((member_event.membership, member_event.event_id)) return except AuthError: visibility = yield self.state_handler.get_current_state( room_id, EventTypes.RoomHistoryVisibility, "" ) if ( visibility and visibility.content["history_visibility"] == "world_readable" ): defer.returnValue((Membership.JOIN, None)) return raise AuthError( 403, "Guest access not allowed", errcode=Codes.GUEST_ACCESS_FORBIDDEN ) synapse-0.24.0/synapse/handlers/message.py000066400000000000000000000607431317335640100205540ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014 - 2016 OpenMarket Ltd # Copyright 2017 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
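# Illustrative example (not part of the original file): callers (e.g. the REST
# servlets) drive MessageHandler.create_and_send_nonmember_event() below with a
# plain event template dict, roughly of this shape (hypothetical values):
#
#     event_dict = {
#         "type": "m.room.message",
#         "room_id": "!abc123:example.com",
#         "sender": "@alice:example.com",
#         "content": {"msgtype": "m.text", "body": "hello"},
#     }
#
# create_event() fills in prev_events, depth, hashes and signatures before the
# event is persisted and pushed out to clients and federation.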
from twisted.internet import defer from synapse.api.constants import EventTypes, Membership from synapse.api.errors import AuthError, Codes, SynapseError from synapse.crypto.event_signing import add_hashes_and_signatures from synapse.events.utils import serialize_event from synapse.events.validator import EventValidator from synapse.types import ( UserID, RoomAlias, RoomStreamToken, ) from synapse.util.async import run_on_reactor, ReadWriteLock, Limiter from synapse.util.logcontext import preserve_fn from synapse.util.metrics import measure_func from synapse.util.frozenutils import unfreeze from synapse.visibility import filter_events_for_client from ._base import BaseHandler from canonicaljson import encode_canonical_json import logging import random import ujson logger = logging.getLogger(__name__) class MessageHandler(BaseHandler): def __init__(self, hs): super(MessageHandler, self).__init__(hs) self.hs = hs self.state = hs.get_state_handler() self.clock = hs.get_clock() self.validator = EventValidator() self.profile_handler = hs.get_profile_handler() self.pagination_lock = ReadWriteLock() self.pusher_pool = hs.get_pusherpool() # We arbitrarily limit concurrent event creation for a room to 5. # This is to stop us from diverging history *too* much. self.limiter = Limiter(max_count=5) self.action_generator = hs.get_action_generator() self.spam_checker = hs.get_spam_checker() @defer.inlineCallbacks def purge_history(self, room_id, event_id): event = yield self.store.get_event(event_id) if event.room_id != room_id: raise SynapseError(400, "Event is for wrong room.") depth = event.depth with (yield self.pagination_lock.write(room_id)): yield self.store.delete_old_state(room_id, depth) @defer.inlineCallbacks def get_messages(self, requester, room_id=None, pagin_config=None, as_client_event=True, event_filter=None): """Get messages in a room. Args: requester (Requester): The user requesting messages. room_id (str): The room they want messages from. pagin_config (synapse.api.streams.PaginationConfig): The pagination config rules to apply, if any. as_client_event (bool): True to get events in client-server format. event_filter (Filter): Filter to apply to results or None Returns: dict: Pagination API results """ user_id = requester.user.to_string() if pagin_config.from_token: room_token = pagin_config.from_token.room_key else: pagin_config.from_token = ( yield self.hs.get_event_sources().get_current_token_for_room( room_id=room_id ) ) room_token = pagin_config.from_token.room_key room_token = RoomStreamToken.parse(room_token) pagin_config.from_token = pagin_config.from_token.copy_and_replace( "room_key", str(room_token) ) source_config = pagin_config.get_source_config("room") with (yield self.pagination_lock.read(room_id)): membership, member_event_id = yield self._check_in_room_or_world_readable( room_id, user_id ) if source_config.direction == 'b': # if we're going backwards, we might need to backfill. This # requires that we have a topo token. if room_token.topological: max_topo = room_token.topological else: max_topo = yield self.store.get_max_topological_token( room_id, room_token.stream ) if membership == Membership.LEAVE: # If they have left the room then clamp the token to be before # they left the room, to save the effort of loading from the # database. 
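                    # Worked example (hypothetical values): if the member event
                    # that took them out of the room sits at topological
                    # ordering 42 but the requested range would otherwise start
                    # at max_topo = 100, from_key is clamped to 42, so nothing
                    # sent after they left is loaded or backfilled on their
                    # behalf.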
leave_token = yield self.store.get_topological_token_for_event( member_event_id ) leave_token = RoomStreamToken.parse(leave_token) if leave_token.topological < max_topo: source_config.from_key = str(leave_token) yield self.hs.get_handlers().federation_handler.maybe_backfill( room_id, max_topo ) events, next_key = yield self.store.paginate_room_events( room_id=room_id, from_key=source_config.from_key, to_key=source_config.to_key, direction=source_config.direction, limit=source_config.limit, event_filter=event_filter, ) next_token = pagin_config.from_token.copy_and_replace( "room_key", next_key ) if not events: defer.returnValue({ "chunk": [], "start": pagin_config.from_token.to_string(), "end": next_token.to_string(), }) if event_filter: events = event_filter.filter(events) events = yield filter_events_for_client( self.store, user_id, events, is_peeking=(member_event_id is None), ) time_now = self.clock.time_msec() chunk = { "chunk": [ serialize_event(e, time_now, as_client_event) for e in events ], "start": pagin_config.from_token.to_string(), "end": next_token.to_string(), } defer.returnValue(chunk) @defer.inlineCallbacks def create_event(self, requester, event_dict, token_id=None, txn_id=None, prev_event_ids=None): """ Given a dict from a client, create a new event. Creates an FrozenEvent object, filling out auth_events, prev_events, etc. Adds display names to Join membership events. Args: requester event_dict (dict): An entire event token_id (str) txn_id (str) prev_event_ids (list): The prev event ids to use when creating the event Returns: Tuple of created event (FrozenEvent), Context """ builder = self.event_builder_factory.new(event_dict) with (yield self.limiter.queue(builder.room_id)): self.validator.validate_new(builder) if builder.type == EventTypes.Member: membership = builder.content.get("membership", None) target = UserID.from_string(builder.state_key) if membership in {Membership.JOIN, Membership.INVITE}: # If event doesn't include a display name, add one. profile = self.profile_handler content = builder.content try: if "displayname" not in content: content["displayname"] = yield profile.get_displayname(target) if "avatar_url" not in content: content["avatar_url"] = yield profile.get_avatar_url(target) except Exception as e: logger.info( "Failed to get profile information for %r: %s", target, e ) if token_id is not None: builder.internal_metadata.token_id = token_id if txn_id is not None: builder.internal_metadata.txn_id = txn_id event, context = yield self._create_new_client_event( builder=builder, requester=requester, prev_event_ids=prev_event_ids, ) defer.returnValue((event, context)) @defer.inlineCallbacks def send_nonmember_event(self, requester, event, context, ratelimit=True): """ Persists and notifies local clients and federation of an event. Args: event (FrozenEvent) the event to send. context (Context) the context of the event. ratelimit (bool): Whether to rate limit this send. is_guest (bool): Whether the sender is a guest. """ if event.type == EventTypes.Member: raise SynapseError( 500, "Tried to send member event through non-member codepath" ) # We check here if we are currently being rate limited, so that we # don't do unnecessary work. We check again just before we actually # send the event. 
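        # (Illustrative reading: update=False here presumably just peeks at the
        # limiter without counting the action; the plain ratelimit() call made
        # inside handle_new_client_event() is the one that records the send.)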
yield self.ratelimit(requester, update=False) user = UserID.from_string(event.sender) assert self.hs.is_mine(user), "User must be our own: %s" % (user,) if event.is_state(): prev_state = yield self.deduplicate_state_event(event, context) if prev_state is not None: defer.returnValue(prev_state) yield self.handle_new_client_event( requester=requester, event=event, context=context, ratelimit=ratelimit, ) if event.type == EventTypes.Message: presence = self.hs.get_presence_handler() # We don't want to block sending messages on any presence code. This # matters as sometimes presence code can take a while. preserve_fn(presence.bump_presence_active_time)(user) @defer.inlineCallbacks def deduplicate_state_event(self, event, context): """ Checks whether event is in the latest resolved state in context. If so, returns the version of the event in context. Otherwise, returns None. """ prev_event_id = context.prev_state_ids.get((event.type, event.state_key)) prev_event = yield self.store.get_event(prev_event_id, allow_none=True) if not prev_event: return if prev_event and event.user_id == prev_event.user_id: prev_content = encode_canonical_json(prev_event.content) next_content = encode_canonical_json(event.content) if prev_content == next_content: defer.returnValue(prev_event) return @defer.inlineCallbacks def create_and_send_nonmember_event( self, requester, event_dict, ratelimit=True, txn_id=None ): """ Creates an event, then sends it. See self.create_event and self.send_nonmember_event. """ event, context = yield self.create_event( requester, event_dict, token_id=requester.access_token_id, txn_id=txn_id ) spam_error = self.spam_checker.check_event_for_spam(event) if spam_error: if not isinstance(spam_error, basestring): spam_error = "Spam is not permitted here" raise SynapseError( 403, spam_error, Codes.FORBIDDEN ) yield self.send_nonmember_event( requester, event, context, ratelimit=ratelimit, ) defer.returnValue(event) @defer.inlineCallbacks def get_room_data(self, user_id=None, room_id=None, event_type=None, state_key="", is_guest=False): """ Get data from a room. Args: event : The room path event Returns: The path data content. Raises: SynapseError if something went wrong. """ membership, membership_event_id = yield self._check_in_room_or_world_readable( room_id, user_id ) if membership == Membership.JOIN: data = yield self.state_handler.get_current_state( room_id, event_type, state_key ) elif membership == Membership.LEAVE: key = (event_type, state_key) room_state = yield self.store.get_state_for_events( [membership_event_id], [key] ) data = room_state[membership_event_id].get(key) defer.returnValue(data) @defer.inlineCallbacks def _check_in_room_or_world_readable(self, room_id, user_id): try: # check_user_was_in_room will return the most recent membership # event for the user if: # * The user is a non-guest user, and was ever in the room # * The user is a guest user, and has joined the room # else it will throw. 
member_event = yield self.auth.check_user_was_in_room(room_id, user_id) defer.returnValue((member_event.membership, member_event.event_id)) return except AuthError: visibility = yield self.state_handler.get_current_state( room_id, EventTypes.RoomHistoryVisibility, "" ) if ( visibility and visibility.content["history_visibility"] == "world_readable" ): defer.returnValue((Membership.JOIN, None)) return raise AuthError( 403, "Guest access not allowed", errcode=Codes.GUEST_ACCESS_FORBIDDEN ) @defer.inlineCallbacks def get_state_events(self, user_id, room_id, is_guest=False): """Retrieve all state events for a given room. If the user is joined to the room then return the current state. If the user has left the room return the state events from when they left. Args: user_id(str): The user requesting state events. room_id(str): The room ID to get all state events from. Returns: A list of dicts representing state events. [{}, {}, {}] """ membership, membership_event_id = yield self._check_in_room_or_world_readable( room_id, user_id ) if membership == Membership.JOIN: room_state = yield self.state_handler.get_current_state(room_id) elif membership == Membership.LEAVE: room_state = yield self.store.get_state_for_events( [membership_event_id], None ) room_state = room_state[membership_event_id] now = self.clock.time_msec() defer.returnValue( [serialize_event(c, now) for c in room_state.values()] ) @defer.inlineCallbacks def get_joined_members(self, requester, room_id): """Get all the joined members in the room and their profile information. If the user has left the room return the state events from when they left. Args: requester(Requester): The user requesting state events. room_id(str): The room ID to get all state events from. Returns: A dict of user_id to profile info """ user_id = requester.user.to_string() if not requester.app_service: # We check AS auth after fetching the room membership, as it # requires us to pull out all joined members anyway. membership, _ = yield self._check_in_room_or_world_readable( room_id, user_id ) if membership != Membership.JOIN: raise NotImplementedError( "Getting joined members after leaving is not implemented" ) users_with_profile = yield self.state.get_current_user_in_room(room_id) # If this is an AS, double check that they are allowed to see the members. # This can either be because the AS user is in the room or becuase there # is a user in the room that the AS is "interested in" if requester.app_service and user_id not in users_with_profile: for uid in users_with_profile: if requester.app_service.is_interested_in_user(uid): break else: # Loop fell through, AS has no interested users in room raise AuthError(403, "Appservice not in room") defer.returnValue({ user_id: { "avatar_url": profile.avatar_url, "display_name": profile.display_name, } for user_id, profile in users_with_profile.iteritems() }) @measure_func("_create_new_client_event") @defer.inlineCallbacks def _create_new_client_event(self, builder, requester=None, prev_event_ids=None): if prev_event_ids: prev_events = yield self.store.add_event_hashes(prev_event_ids) prev_max_depth = yield self.store.get_max_depth_of_events(prev_event_ids) depth = prev_max_depth + 1 else: latest_ret = yield self.store.get_latest_event_ids_and_hashes_in_room( builder.room_id, ) # We want to limit the max number of prev events we point to in our # new event if len(latest_ret) > 10: # Sort by reverse depth, so we point to the most recent. 
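                # Worked example (illustrative): with 12 forward extremities we
                # keep the 5 deepest (most recent) ones and then add up to 5
                # sampled at random from the remaining 7, so a new event never
                # references more than 10 prev_events while older extremities
                # still get folded in eventually.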
latest_ret.sort(key=lambda a: -a[2]) new_latest_ret = latest_ret[:5] # We also randomly point to some of the older events, to make # sure that we don't completely ignore the older events. if latest_ret[5:]: sample_size = min(5, len(latest_ret[5:])) new_latest_ret.extend(random.sample(latest_ret[5:], sample_size)) latest_ret = new_latest_ret if latest_ret: depth = max([d for _, _, d in latest_ret]) + 1 else: depth = 1 prev_events = [ (event_id, prev_hashes) for event_id, prev_hashes, _ in latest_ret ] builder.prev_events = prev_events builder.depth = depth state_handler = self.state_handler context = yield state_handler.compute_event_context(builder) if requester: context.app_service = requester.app_service if builder.is_state(): builder.prev_state = yield self.store.add_event_hashes( context.prev_state_events ) yield self.auth.add_auth_events(builder, context) signing_key = self.hs.config.signing_key[0] add_hashes_and_signatures( builder, self.server_name, signing_key ) event = builder.build() logger.debug( "Created event %s with state: %s", event.event_id, context.prev_state_ids, ) defer.returnValue( (event, context,) ) @measure_func("handle_new_client_event") @defer.inlineCallbacks def handle_new_client_event( self, requester, event, context, ratelimit=True, extra_users=[] ): # We now need to go and hit out to wherever we need to hit out to. if ratelimit: yield self.ratelimit(requester) try: yield self.auth.check_from_context(event, context) except AuthError as err: logger.warn("Denying new event %r because %s", event, err) raise err # Ensure that we can round trip before trying to persist in db try: dump = ujson.dumps(unfreeze(event.content)) ujson.loads(dump) except: logger.exception("Failed to encode content: %r", event.content) raise yield self.maybe_kick_guest_users(event, context) if event.type == EventTypes.CanonicalAlias: # Check the alias is acually valid (at this time at least) room_alias_str = event.content.get("alias", None) if room_alias_str: room_alias = RoomAlias.from_string(room_alias_str) directory_handler = self.hs.get_handlers().directory_handler mapping = yield directory_handler.get_association(room_alias) if mapping["room_id"] != event.room_id: raise SynapseError( 400, "Room alias %s does not point to the room" % ( room_alias_str, ) ) federation_handler = self.hs.get_handlers().federation_handler if event.type == EventTypes.Member: if event.content["membership"] == Membership.INVITE: def is_inviter_member_event(e): return ( e.type == EventTypes.Member and e.sender == event.sender ) state_to_include_ids = [ e_id for k, e_id in context.current_state_ids.iteritems() if k[0] in self.hs.config.room_invite_state_types or k == (EventTypes.Member, event.sender) ] state_to_include = yield self.store.get_events(state_to_include_ids) event.unsigned["invite_room_state"] = [ { "type": e.type, "state_key": e.state_key, "content": e.content, "sender": e.sender, } for e in state_to_include.itervalues() ] invitee = UserID.from_string(event.state_key) if not self.hs.is_mine(invitee): # TODO: Can we add signature from remote server in a nicer # way? If we have been invited by a remote server, we need # to get them to sign the event. returned_invite = yield federation_handler.send_invite( invitee.domain, event, ) event.unsigned.pop("room_state", None) # TODO: Make sure the signatures actually are correct. 
event.signatures.update( returned_invite.signatures ) if event.type == EventTypes.Redaction: auth_events_ids = yield self.auth.compute_auth_events( event, context.prev_state_ids, for_verification=True, ) auth_events = yield self.store.get_events(auth_events_ids) auth_events = { (e.type, e.state_key): e for e in auth_events.values() } if self.auth.check_redaction(event, auth_events=auth_events): original_event = yield self.store.get_event( event.redacts, check_redacted=False, get_prev_content=False, allow_rejected=False, allow_none=False ) if event.user_id != original_event.user_id: raise AuthError( 403, "You don't have permission to redact events" ) if event.type == EventTypes.Create and context.prev_state_ids: raise AuthError( 403, "Changing the room create event is forbidden", ) yield self.action_generator.handle_push_actions_for_event( event, context ) (event_stream_id, max_stream_id) = yield self.store.persist_event( event, context=context ) # this intentionally does not yield: we don't care about the result # and don't need to wait for it. preserve_fn(self.pusher_pool.on_new_notifications)( event_stream_id, max_stream_id ) @defer.inlineCallbacks def _notify(): yield run_on_reactor() self.notifier.on_new_room_event( event, event_stream_id, max_stream_id, extra_users=extra_users ) preserve_fn(_notify)() synapse-0.24.0/synapse/handlers/presence.py000066400000000000000000001434161317335640100207330ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """This module is responsible for keeping track of presence status of local and remote users. 
The methods that define policy are: - PresenceHandler._update_states - PresenceHandler._handle_timeouts - should_notify """ from twisted.internet import defer, reactor from contextlib import contextmanager from synapse.api.errors import SynapseError from synapse.api.constants import PresenceState from synapse.storage.presence import UserPresenceState from synapse.util.caches.descriptors import cachedInlineCallbacks from synapse.util.async import Linearizer from synapse.util.logcontext import preserve_fn from synapse.util.logutils import log_function from synapse.util.metrics import Measure from synapse.util.wheel_timer import WheelTimer from synapse.types import UserID, get_domain_from_id import synapse.metrics import logging logger = logging.getLogger(__name__) metrics = synapse.metrics.get_metrics_for(__name__) notified_presence_counter = metrics.register_counter("notified_presence") federation_presence_out_counter = metrics.register_counter("federation_presence_out") presence_updates_counter = metrics.register_counter("presence_updates") timers_fired_counter = metrics.register_counter("timers_fired") federation_presence_counter = metrics.register_counter("federation_presence") bump_active_time_counter = metrics.register_counter("bump_active_time") get_updates_counter = metrics.register_counter("get_updates", labels=["type"]) notify_reason_counter = metrics.register_counter("notify_reason", labels=["reason"]) state_transition_counter = metrics.register_counter( "state_transition", labels=["from", "to"] ) # If a user was last active in the last LAST_ACTIVE_GRANULARITY, consider them # "currently_active" LAST_ACTIVE_GRANULARITY = 60 * 1000 # How long to wait until a new /events or /sync request before assuming # the client has gone. SYNC_ONLINE_TIMEOUT = 30 * 1000 # How long to wait before marking the user as idle. Compared against last active IDLE_TIMER = 5 * 60 * 1000 # How often we expect remote servers to resend us presence. FEDERATION_TIMEOUT = 30 * 60 * 1000 # How often to resend presence to remote servers FEDERATION_PING_INTERVAL = 25 * 60 * 1000 # How long we will wait before assuming that the syncs from an external process # are dead. 
EXTERNAL_PROCESS_EXPIRY = 5 * 60 * 1000 assert LAST_ACTIVE_GRANULARITY < IDLE_TIMER class PresenceHandler(object): def __init__(self, hs): self.is_mine = hs.is_mine self.is_mine_id = hs.is_mine_id self.clock = hs.get_clock() self.store = hs.get_datastore() self.wheel_timer = WheelTimer() self.notifier = hs.get_notifier() self.replication = hs.get_replication_layer() self.federation = hs.get_federation_sender() self.state = hs.get_state_handler() self.replication.register_edu_handler( "m.presence", self.incoming_presence ) self.replication.register_edu_handler( "m.presence_invite", lambda origin, content: self.invite_presence( observed_user=UserID.from_string(content["observed_user"]), observer_user=UserID.from_string(content["observer_user"]), ) ) self.replication.register_edu_handler( "m.presence_accept", lambda origin, content: self.accept_presence( observed_user=UserID.from_string(content["observed_user"]), observer_user=UserID.from_string(content["observer_user"]), ) ) self.replication.register_edu_handler( "m.presence_deny", lambda origin, content: self.deny_presence( observed_user=UserID.from_string(content["observed_user"]), observer_user=UserID.from_string(content["observer_user"]), ) ) distributor = hs.get_distributor() distributor.observe("user_joined_room", self.user_joined_room) active_presence = self.store.take_presence_startup_info() # A dictionary of the current state of users. This is prefilled with # non-offline presence from the DB. We should fetch from the DB if # we can't find a users presence in here. self.user_to_current_state = { state.user_id: state for state in active_presence } metrics.register_callback( "user_to_current_state_size", lambda: len(self.user_to_current_state) ) now = self.clock.time_msec() for state in active_presence: self.wheel_timer.insert( now=now, obj=state.user_id, then=state.last_active_ts + IDLE_TIMER, ) self.wheel_timer.insert( now=now, obj=state.user_id, then=state.last_user_sync_ts + SYNC_ONLINE_TIMEOUT, ) if self.is_mine_id(state.user_id): self.wheel_timer.insert( now=now, obj=state.user_id, then=state.last_federation_update_ts + FEDERATION_PING_INTERVAL, ) else: self.wheel_timer.insert( now=now, obj=state.user_id, then=state.last_federation_update_ts + FEDERATION_TIMEOUT, ) # Set of users who have presence in the `user_to_current_state` that # have not yet been persisted self.unpersisted_users_changes = set() reactor.addSystemEventTrigger("before", "shutdown", self._on_shutdown) self.serial_to_user = {} self._next_serial = 1 # Keeps track of the number of *ongoing* syncs on this process. While # this is non zero a user will never go offline. self.user_to_num_current_syncs = {} # Keeps track of the number of *ongoing* syncs on other processes. # While any sync is ongoing on another process the user will never # go offline. # Each process has a unique identifier and an update frequency. If # no update is received from that process within the update period then # we assume that all the sync requests on that process have stopped. # Stored as a dict from process_id to set of user_id, and a dict of # process_id to millisecond timestamp last updated. self.external_process_to_current_syncs = {} self.external_process_last_updated_ms = {} self.external_sync_linearizer = Linearizer(name="external_sync_linearizer") # Start a LoopingCall in 30s that fires every 5s. # The initial delay is to allow disconnected clients a chance to # reconnect before we treat them as offline. 
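        # Illustrative sketch (not part of the original file): the call below
        # is equivalent to scheduling
        #
        #     def _start_timeout_loop():
        #         self.clock.looping_call(self._handle_timeouts, 5000)  # every 5s
        #
        #     self.clock.call_later(30, _start_timeout_loop)  # after a 30s grace period
        #
        # i.e. call_later() delays the first run and looping_call() keeps it
        # repeating thereafter.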
        self.clock.call_later(
            30,
            self.clock.looping_call,
            self._handle_timeouts,
            5000,
        )

        self.clock.call_later(
            60,
            self.clock.looping_call,
            self._persist_unpersisted_changes,
            60 * 1000,
        )

        metrics.register_callback("wheel_timer_size", lambda: len(self.wheel_timer))

    @defer.inlineCallbacks
    def _on_shutdown(self):
        """Gets called when shutting down. This lets us persist any updates that
        we haven't yet persisted, e.g. updates that only change some internal
        timers. This allows changes to persist across startup without having to
        persist every single change.

        If this does not run it simply means that some of the timers will fire
        earlier than they should when synapse is restarted. The effect of this
        is some spurious presence changes that will self-correct.
        """
        logger.info(
            "Performing _on_shutdown. Persisting %d unpersisted changes",
            len(self.unpersisted_users_changes)
        )

        if self.unpersisted_users_changes:
            yield self.store.update_presence([
                self.user_to_current_state[user_id]
                for user_id in self.unpersisted_users_changes
            ])

        logger.info("Finished _on_shutdown")

    @defer.inlineCallbacks
    def _persist_unpersisted_changes(self):
        """We periodically persist the unpersisted changes, as otherwise they
        may stack up and slow down shutdown times.
        """
        logger.info(
            "Performing _persist_unpersisted_changes. Persisting %d unpersisted changes",
            len(self.unpersisted_users_changes)
        )

        unpersisted = self.unpersisted_users_changes
        self.unpersisted_users_changes = set()

        if unpersisted:
            yield self.store.update_presence([
                self.user_to_current_state[user_id]
                for user_id in unpersisted
            ])

        logger.info("Finished _persist_unpersisted_changes")

    @defer.inlineCallbacks
    def _update_states(self, new_states):
        """Updates presence of users. Sets the appropriate timeouts. Pokes
        the notifier and federation if and only if the changed presence state
        should be sent to clients/servers.
        """
        now = self.clock.time_msec()

        with Measure(self.clock, "presence_update_states"):

            # NOTE: We purposefully don't yield between now and when we've
            # calculated what we want to do with the new states, to avoid races.

            to_notify = {}  # Changes we want to notify everyone about
            to_federation_ping = {}  # These need sending keep-alives

            # Only bother handling the last presence change for each user
            new_states_dict = {}
            for new_state in new_states:
                new_states_dict[new_state.user_id] = new_state
            new_states = new_states_dict.values()

            for new_state in new_states:
                user_id = new_state.user_id

                # It's fine to not hit the database here, as the only thing not in
                # the current state cache are OFFLINE states, where the only field
                # of interest is last_active which is safe enough to assume is 0
                # here.
                prev_state = self.user_to_current_state.get(
                    user_id, UserPresenceState.default(user_id)
                )

                new_state, should_notify, should_ping = handle_update(
                    prev_state, new_state,
                    is_mine=self.is_mine_id(user_id),
                    wheel_timer=self.wheel_timer,
                    now=now
                )

                self.user_to_current_state[user_id] = new_state

                if should_notify:
                    to_notify[user_id] = new_state
                elif should_ping:
                    to_federation_ping[user_id] = new_state

            # TODO: We should probably ensure there are no races hereafter

            presence_updates_counter.inc_by(len(new_states))

            if to_notify:
                notified_presence_counter.inc_by(len(to_notify))
                yield self._persist_and_notify(to_notify.values())

            self.unpersisted_users_changes |= set(s.user_id for s in new_states)
            self.unpersisted_users_changes -= set(to_notify.keys())

            to_federation_ping = {
                user_id: state for user_id, state in to_federation_ping.items()
                if user_id not in to_notify
            }
            if to_federation_ping:
                federation_presence_out_counter.inc_by(len(to_federation_ping))

                self._push_to_remotes(to_federation_ping.values())

    def _handle_timeouts(self):
        """Checks the presence of users that have timed out and updates as
        appropriate.
        """
        logger.info("Handling presence timeouts")
        now = self.clock.time_msec()

        try:
            with Measure(self.clock, "presence_handle_timeouts"):
                # Fetch the list of users that *may* have timed out. Things may have
                # changed since the timeout was set, so we won't necessarily have to
                # take any action.
                users_to_check = set(self.wheel_timer.fetch(now))

                # Check whether the lists of syncing processes from an external
                # process have expired.
                expired_process_ids = [
                    process_id for process_id, last_update
                    in self.external_process_last_updated_ms.items()
                    if now - last_update > EXTERNAL_PROCESS_EXPIRY
                ]
                for process_id in expired_process_ids:
                    users_to_check.update(
                        self.external_process_to_current_syncs.pop(process_id, ())
                    )
                    self.external_process_last_updated_ms.pop(process_id)

                states = [
                    self.user_to_current_state.get(
                        user_id, UserPresenceState.default(user_id)
                    )
                    for user_id in users_to_check
                ]

                timers_fired_counter.inc_by(len(states))

                changes = handle_timeouts(
                    states,
                    is_mine_fn=self.is_mine_id,
                    syncing_user_ids=self.get_currently_syncing_users(),
                    now=now,
                )

            preserve_fn(self._update_states)(changes)
        except:
            logger.exception("Exception in _handle_timeouts loop")

    @defer.inlineCallbacks
    def bump_presence_active_time(self, user):
        """We've seen the user do something that indicates they're interacting
        with the app.
        """
        user_id = user.to_string()

        bump_active_time_counter.inc()

        prev_state = yield self.current_state_for_user(user_id)

        new_fields = {
            "last_active_ts": self.clock.time_msec(),
        }
        if prev_state.state == PresenceState.UNAVAILABLE:
            new_fields["state"] = PresenceState.ONLINE

        yield self._update_states([prev_state.copy_and_replace(**new_fields)])

    @defer.inlineCallbacks
    def user_syncing(self, user_id, affect_presence=True):
        """Returns a context manager that should surround any stream requests
        from the user. This allows us to keep track of who is currently
        streaming and who isn't without having to have timers outside of this
        module to avoid flickering when users disconnect/reconnect.

        Args:
            user_id (str)
            affect_presence (bool): If false this function will be a no-op.
                Useful for streams that are not associated with an actual
                client that is being used by a user.
""" if affect_presence: curr_sync = self.user_to_num_current_syncs.get(user_id, 0) self.user_to_num_current_syncs[user_id] = curr_sync + 1 prev_state = yield self.current_state_for_user(user_id) if prev_state.state == PresenceState.OFFLINE: # If they're currently offline then bring them online, otherwise # just update the last sync times. yield self._update_states([prev_state.copy_and_replace( state=PresenceState.ONLINE, last_active_ts=self.clock.time_msec(), last_user_sync_ts=self.clock.time_msec(), )]) else: yield self._update_states([prev_state.copy_and_replace( last_user_sync_ts=self.clock.time_msec(), )]) @defer.inlineCallbacks def _end(): if affect_presence: self.user_to_num_current_syncs[user_id] -= 1 prev_state = yield self.current_state_for_user(user_id) yield self._update_states([prev_state.copy_and_replace( last_user_sync_ts=self.clock.time_msec(), )]) @contextmanager def _user_syncing(): try: yield finally: preserve_fn(_end)() defer.returnValue(_user_syncing()) def get_currently_syncing_users(self): """Get the set of user ids that are currently syncing on this HS. Returns: set(str): A set of user_id strings. """ syncing_user_ids = { user_id for user_id, count in self.user_to_num_current_syncs.items() if count } for user_ids in self.external_process_to_current_syncs.values(): syncing_user_ids.update(user_ids) return syncing_user_ids @defer.inlineCallbacks def update_external_syncs(self, process_id, syncing_user_ids): """Update the syncing users for an external process Args: process_id(str): An identifier for the process the users are syncing against. This allows synapse to process updates as user start and stop syncing against a given process. syncing_user_ids(set(str)): The set of user_ids that are currently syncing on that server. """ # Grab the previous list of user_ids that were syncing on that process prev_syncing_user_ids = ( self.external_process_to_current_syncs.get(process_id, set()) ) # Grab the current presence state for both the users that are syncing # now and the users that were syncing before this update. prev_states = yield self.current_state_for_users( syncing_user_ids | prev_syncing_user_ids ) updates = [] time_now_ms = self.clock.time_msec() # For each new user that is syncing check if we need to mark them as # being online. for new_user_id in syncing_user_ids - prev_syncing_user_ids: prev_state = prev_states[new_user_id] if prev_state.state == PresenceState.OFFLINE: updates.append(prev_state.copy_and_replace( state=PresenceState.ONLINE, last_active_ts=time_now_ms, last_user_sync_ts=time_now_ms, )) else: updates.append(prev_state.copy_and_replace( last_user_sync_ts=time_now_ms, )) # For each user that is still syncing or stopped syncing update the # last sync time so that we will correctly apply the grace period when # they stop syncing. for old_user_id in prev_syncing_user_ids: prev_state = prev_states[old_user_id] updates.append(prev_state.copy_and_replace( last_user_sync_ts=time_now_ms, )) yield self._update_states(updates) # Update the last updated time for the process. We expire the entries # if we don't receive an update in the given timeframe. self.external_process_last_updated_ms[process_id] = self.clock.time_msec() self.external_process_to_current_syncs[process_id] = syncing_user_ids @defer.inlineCallbacks def update_external_syncs_row(self, process_id, user_id, is_syncing, sync_time_msec): """Update the syncing users for an external process as a delta. Args: process_id (str): An identifier for the process the users are syncing against. 
This allows synapse to process updates as user start and stop syncing against a given process. user_id (str): The user who has started or stopped syncing is_syncing (bool): Whether or not the user is now syncing sync_time_msec(int): Time in ms when the user was last syncing """ with (yield self.external_sync_linearizer.queue(process_id)): prev_state = yield self.current_state_for_user(user_id) process_presence = self.external_process_to_current_syncs.setdefault( process_id, set() ) updates = [] if is_syncing and user_id not in process_presence: if prev_state.state == PresenceState.OFFLINE: updates.append(prev_state.copy_and_replace( state=PresenceState.ONLINE, last_active_ts=sync_time_msec, last_user_sync_ts=sync_time_msec, )) else: updates.append(prev_state.copy_and_replace( last_user_sync_ts=sync_time_msec, )) process_presence.add(user_id) elif user_id in process_presence: updates.append(prev_state.copy_and_replace( last_user_sync_ts=sync_time_msec, )) if not is_syncing: process_presence.discard(user_id) if updates: yield self._update_states(updates) self.external_process_last_updated_ms[process_id] = self.clock.time_msec() @defer.inlineCallbacks def update_external_syncs_clear(self, process_id): """Marks all users that had been marked as syncing by a given process as offline. Used when the process has stopped/disappeared. """ with (yield self.external_sync_linearizer.queue(process_id)): process_presence = self.external_process_to_current_syncs.pop( process_id, set() ) prev_states = yield self.current_state_for_users(process_presence) time_now_ms = self.clock.time_msec() yield self._update_states([ prev_state.copy_and_replace( last_user_sync_ts=time_now_ms, ) for prev_state in prev_states.itervalues() ]) self.external_process_last_updated_ms.pop(process_id, None) @defer.inlineCallbacks def current_state_for_user(self, user_id): """Get the current presence state for a user. """ res = yield self.current_state_for_users([user_id]) defer.returnValue(res[user_id]) @defer.inlineCallbacks def current_state_for_users(self, user_ids): """Get the current presence state for multiple users. Returns: dict: `user_id` -> `UserPresenceState` """ states = { user_id: self.user_to_current_state.get(user_id, None) for user_id in user_ids } missing = [user_id for user_id, state in states.iteritems() if not state] if missing: # There are things not in our in memory cache. Lets pull them out of # the database. 
res = yield self.store.get_presence_for_users(missing) states.update(res) missing = [user_id for user_id, state in states.iteritems() if not state] if missing: new = { user_id: UserPresenceState.default(user_id) for user_id in missing } states.update(new) self.user_to_current_state.update(new) defer.returnValue(states) @defer.inlineCallbacks def _persist_and_notify(self, states): """Persist states in the database, poke the notifier and send to interested remote servers """ stream_id, max_token = yield self.store.update_presence(states) parties = yield get_interested_parties(self.store, states) room_ids_to_states, users_to_states = parties self.notifier.on_new_event( "presence_key", stream_id, rooms=room_ids_to_states.keys(), users=[UserID.from_string(u) for u in users_to_states] ) self._push_to_remotes(states) @defer.inlineCallbacks def notify_for_states(self, state, stream_id): parties = yield get_interested_parties(self.store, [state]) room_ids_to_states, users_to_states = parties self.notifier.on_new_event( "presence_key", stream_id, rooms=room_ids_to_states.keys(), users=[UserID.from_string(u) for u in users_to_states] ) def _push_to_remotes(self, states): """Sends state updates to remote servers. Args: states (list(UserPresenceState)) """ self.federation.send_presence(states) @defer.inlineCallbacks def incoming_presence(self, origin, content): """Called when we receive a `m.presence` EDU from a remote server. """ now = self.clock.time_msec() updates = [] for push in content.get("push", []): # A "push" contains a list of presence that we are probably interested # in. # TODO: Actually check if we're interested, rather than blindly # accepting presence updates. user_id = push.get("user_id", None) if not user_id: logger.info( "Got presence update from %r with no 'user_id': %r", origin, push, ) continue if get_domain_from_id(user_id) != origin: logger.info( "Got presence update from %r with bad 'user_id': %r", origin, user_id, ) continue presence_state = push.get("presence", None) if not presence_state: logger.info( "Got presence update from %r with no 'presence_state': %r", origin, push, ) continue new_fields = { "state": presence_state, "last_federation_update_ts": now, } last_active_ago = push.get("last_active_ago", None) if last_active_ago is not None: new_fields["last_active_ts"] = now - last_active_ago new_fields["status_msg"] = push.get("status_msg", None) new_fields["currently_active"] = push.get("currently_active", False) prev_state = yield self.current_state_for_user(user_id) updates.append(prev_state.copy_and_replace(**new_fields)) if updates: federation_presence_counter.inc_by(len(updates)) yield self._update_states(updates) @defer.inlineCallbacks def get_state(self, target_user, as_event=False): results = yield self.get_states( [target_user.to_string()], as_event=as_event, ) defer.returnValue(results[0]) @defer.inlineCallbacks def get_states(self, target_user_ids, as_event=False): """Get the presence state for users. Args: target_user_ids (list) as_event (bool): Whether to format it as a client event or not. 
Returns: list """ updates = yield self.current_state_for_users(target_user_ids) updates = updates.values() for user_id in set(target_user_ids) - set(u.user_id for u in updates): updates.append(UserPresenceState.default(user_id)) now = self.clock.time_msec() if as_event: defer.returnValue([ { "type": "m.presence", "content": format_user_presence_state(state, now), } for state in updates ]) else: defer.returnValue(updates) @defer.inlineCallbacks def set_state(self, target_user, state, ignore_status_msg=False): """Set the presence state of the user. """ status_msg = state.get("status_msg", None) presence = state["presence"] valid_presence = ( PresenceState.ONLINE, PresenceState.UNAVAILABLE, PresenceState.OFFLINE ) if presence not in valid_presence: raise SynapseError(400, "Invalid presence state") user_id = target_user.to_string() prev_state = yield self.current_state_for_user(user_id) new_fields = { "state": presence } if not ignore_status_msg: msg = status_msg if presence != PresenceState.OFFLINE else None new_fields["status_msg"] = msg if presence == PresenceState.ONLINE: new_fields["last_active_ts"] = self.clock.time_msec() yield self._update_states([prev_state.copy_and_replace(**new_fields)]) @defer.inlineCallbacks def user_joined_room(self, user, room_id): """Called (via the distributor) when a user joins a room. This funciton sends presence updates to servers, either: 1. the joining user is a local user and we send their presence to all servers in the room. 2. the joining user is a remote user and so we send presence for all local users in the room. """ # We only need to send presence to servers that don't have it yet. We # don't need to send to local clients here, as that is done as part # of the event stream/sync. # TODO: Only send to servers not already in the room. if self.is_mine(user): state = yield self.current_state_for_user(user.to_string()) self._push_to_remotes([state]) else: user_ids = yield self.store.get_users_in_room(room_id) user_ids = filter(self.is_mine_id, user_ids) states = yield self.current_state_for_users(user_ids) self._push_to_remotes(states.values()) @defer.inlineCallbacks def get_presence_list(self, observer_user, accepted=None): """Returns the presence for all users in their presence list. """ if not self.is_mine(observer_user): raise SynapseError(400, "User is not hosted on this Home Server") presence_list = yield self.store.get_presence_list( observer_user.localpart, accepted=accepted ) results = yield self.get_states( target_user_ids=[row["observed_user_id"] for row in presence_list], as_event=False, ) now = self.clock.time_msec() results[:] = [format_user_presence_state(r, now) for r in results] is_accepted = { row["observed_user_id"]: row["accepted"] for row in presence_list } for result in results: result.update({ "accepted": is_accepted, }) defer.returnValue(results) @defer.inlineCallbacks def send_presence_invite(self, observer_user, observed_user): """Sends a presence invite. """ yield self.store.add_presence_list_pending( observer_user.localpart, observed_user.to_string() ) if self.is_mine(observed_user): yield self.invite_presence(observed_user, observer_user) else: yield self.federation.send_edu( destination=observed_user.domain, edu_type="m.presence_invite", content={ "observed_user": observed_user.to_string(), "observer_user": observer_user.to_string(), } ) @defer.inlineCallbacks def invite_presence(self, observed_user, observer_user): """Handles new presence invites. 
""" if not self.is_mine(observed_user): raise SynapseError(400, "User is not hosted on this Home Server") # TODO: Don't auto accept if self.is_mine(observer_user): yield self.accept_presence(observed_user, observer_user) else: self.federation.send_edu( destination=observer_user.domain, edu_type="m.presence_accept", content={ "observed_user": observed_user.to_string(), "observer_user": observer_user.to_string(), } ) state_dict = yield self.get_state(observed_user, as_event=False) state_dict = format_user_presence_state(state_dict, self.clock.time_msec()) self.federation.send_edu( destination=observer_user.domain, edu_type="m.presence", content={ "push": [state_dict] } ) @defer.inlineCallbacks def accept_presence(self, observed_user, observer_user): """Handles a m.presence_accept EDU. Mark a presence invite from a local or remote user as accepted in a local user's presence list. Starts polling for presence updates from the local or remote user. Args: observed_user(UserID): The user to update in the presence list. observer_user(UserID): The owner of the presence list to update. """ yield self.store.set_presence_list_accepted( observer_user.localpart, observed_user.to_string() ) @defer.inlineCallbacks def deny_presence(self, observed_user, observer_user): """Handle a m.presence_deny EDU. Removes a local or remote user from a local user's presence list. Args: observed_user(UserID): The local or remote user to remove from the list. observer_user(UserID): The local owner of the presence list. Returns: A Deferred. """ yield self.store.del_presence_list( observer_user.localpart, observed_user.to_string() ) # TODO(paul): Inform the user somehow? @defer.inlineCallbacks def drop(self, observed_user, observer_user): """Remove a local or remote user from a local user's presence list and unsubscribe the local user from updates that user. Args: observed_user(UserId): The local or remote user to remove from the list. observer_user(UserId): The local owner of the presence list. Returns: A Deferred. """ if not self.is_mine(observer_user): raise SynapseError(400, "User is not hosted on this Home Server") yield self.store.del_presence_list( observer_user.localpart, observed_user.to_string() ) # TODO: Inform the remote that we've dropped the presence list. @defer.inlineCallbacks def is_visible(self, observed_user, observer_user): """Returns whether a user can see another user's presence. """ observer_room_ids = yield self.store.get_rooms_for_user( observer_user.to_string() ) observed_room_ids = yield self.store.get_rooms_for_user( observed_user.to_string() ) if observer_room_ids & observed_room_ids: defer.returnValue(True) accepted_observers = yield self.store.get_presence_list_observers_accepted( observed_user.to_string() ) defer.returnValue(observer_user.to_string() in accepted_observers) @defer.inlineCallbacks def get_all_presence_updates(self, last_id, current_id): """ Gets a list of presence update rows from between the given stream ids. Each row has: - stream_id(str) - user_id(str) - state(str) - last_active_ts(int) - last_federation_update_ts(int) - last_user_sync_ts(int) - status_msg(int) - currently_active(int) """ # TODO(markjh): replicate the unpersisted changes. # This could use the in-memory stores for recent changes. rows = yield self.store.get_all_presence_updates(last_id, current_id) defer.returnValue(rows) def should_notify(old_state, new_state): """Decides if a presence state change should be sent to interested parties. 
""" if old_state == new_state: return False if old_state.status_msg != new_state.status_msg: notify_reason_counter.inc("status_msg_change") return True if old_state.state != new_state.state: notify_reason_counter.inc("state_change") state_transition_counter.inc(old_state.state, new_state.state) return True if old_state.state == PresenceState.ONLINE: if new_state.currently_active != old_state.currently_active: notify_reason_counter.inc("current_active_change") return True if new_state.last_active_ts - old_state.last_active_ts > LAST_ACTIVE_GRANULARITY: # Only notify about last active bumps if we're not currently acive if not new_state.currently_active: notify_reason_counter.inc("last_active_change_online") return True elif new_state.last_active_ts - old_state.last_active_ts > LAST_ACTIVE_GRANULARITY: # Always notify for a transition where last active gets bumped. notify_reason_counter.inc("last_active_change_not_online") return True return False def format_user_presence_state(state, now, include_user_id=True): """Convert UserPresenceState to a format that can be sent down to clients and to other servers. The "user_id" is optional so that this function can be used to format presence updates for client /sync responses and for federation /send requests. """ content = { "presence": state.state, } if include_user_id: content["user_id"] = state.user_id if state.last_active_ts: content["last_active_ago"] = now - state.last_active_ts if state.status_msg and state.state != PresenceState.OFFLINE: content["status_msg"] = state.status_msg if state.state == PresenceState.ONLINE: content["currently_active"] = state.currently_active return content class PresenceEventSource(object): def __init__(self, hs): # We can't call get_presence_handler here because there's a cycle: # # Presence -> Notifier -> PresenceEventSource -> Presence # self.get_presence_handler = hs.get_presence_handler self.clock = hs.get_clock() self.store = hs.get_datastore() self.state = hs.get_state_handler() @defer.inlineCallbacks @log_function def get_new_events(self, user, from_key, room_ids=None, include_offline=True, explicit_room_id=None, **kwargs): # The process for getting presence events are: # 1. Get the rooms the user is in. # 2. Get the list of user in the rooms. # 3. Get the list of users that are in the user's presence list. # 4. If there is a from_key set, cross reference the list of users # with the `presence_stream_cache` to see which ones we actually # need to check. # 5. Load current state for the users. # # We don't try and limit the presence updates by the current token, as # sending down the rare duplicate is not a concern. with Measure(self.clock, "presence.get_new_events"): if from_key is not None: from_key = int(from_key) presence = self.get_presence_handler() stream_change_cache = self.store.presence_stream_cache max_token = self.store.get_current_presence_token() users_interested_in = yield self._get_interested_in(user, explicit_room_id) user_ids_changed = set() changed = None if from_key: changed = stream_change_cache.get_all_entities_changed(from_key) if changed is not None and len(changed) < 500: # For small deltas, its quicker to get all changes and then # work out if we share a room or they're in our presence list get_updates_counter.inc("stream") for other_user_id in changed: if other_user_id in users_interested_in: user_ids_changed.add(other_user_id) else: # Too many possible updates. Find all users we can see and check # if any of them have changed. 
get_updates_counter.inc("full") if from_key: user_ids_changed = stream_change_cache.get_entities_changed( users_interested_in, from_key, ) else: user_ids_changed = users_interested_in updates = yield presence.current_state_for_users(user_ids_changed) if include_offline: defer.returnValue((updates.values(), max_token)) else: defer.returnValue(([ s for s in updates.itervalues() if s.state != PresenceState.OFFLINE ], max_token)) def get_current_key(self): return self.store.get_current_presence_token() def get_pagination_rows(self, user, pagination_config, key): return self.get_new_events(user, from_key=None, include_offline=False) @cachedInlineCallbacks(num_args=2, cache_context=True) def _get_interested_in(self, user, explicit_room_id, cache_context): """Returns the set of users that the given user should see presence updates for """ user_id = user.to_string() plist = yield self.store.get_presence_list_accepted( user.localpart, on_invalidate=cache_context.invalidate, ) users_interested_in = set(row["observed_user_id"] for row in plist) users_interested_in.add(user_id) # So that we receive our own presence users_who_share_room = yield self.store.get_users_who_share_room_with_user( user_id, on_invalidate=cache_context.invalidate, ) users_interested_in.update(users_who_share_room) if explicit_room_id: user_ids = yield self.store.get_users_in_room( explicit_room_id, on_invalidate=cache_context.invalidate, ) users_interested_in.update(user_ids) defer.returnValue(users_interested_in) def handle_timeouts(user_states, is_mine_fn, syncing_user_ids, now): """Checks the presence of users that have timed out and updates as appropriate. Args: user_states(list): List of UserPresenceState's to check. is_mine_fn (fn): Function that returns if a user_id is ours syncing_user_ids (set): Set of user_ids with active syncs. now (int): Current time in ms. Returns: List of UserPresenceState updates """ changes = {} # Actual changes we need to notify people about for state in user_states: is_mine = is_mine_fn(state.user_id) new_state = handle_timeout(state, is_mine, syncing_user_ids, now) if new_state: changes[state.user_id] = new_state return changes.values() def handle_timeout(state, is_mine, syncing_user_ids, now): """Checks the presence of the user to see if any of the timers have elapsed Args: state (UserPresenceState) is_mine (bool): Whether the user is ours syncing_user_ids (set): Set of user_ids with active syncs. now (int): Current time in ms. Returns: A UserPresenceState update or None if no update. """ if state.state == PresenceState.OFFLINE: # No timeouts are associated with offline states. return None changed = False user_id = state.user_id if is_mine: if state.state == PresenceState.ONLINE: if now - state.last_active_ts > IDLE_TIMER: # Currently online, but last activity ages ago so auto # idle state = state.copy_and_replace( state=PresenceState.UNAVAILABLE, ) changed = True elif now - state.last_active_ts > LAST_ACTIVE_GRANULARITY: # So that we send down a notification that we've # stopped updating. changed = True if now - state.last_federation_update_ts > FEDERATION_PING_INTERVAL: # Need to send ping to other servers to ensure they don't # timeout and set us to offline changed = True # If there are have been no sync for a while (and none ongoing), # set presence to offline if user_id not in syncing_user_ids: # If the user has done something recently but hasn't synced, # don't set them as offline. 
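# (we compare against whichever of last_user_sync_ts and last_active_ts is more recent.)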
sync_or_active = max(state.last_user_sync_ts, state.last_active_ts) if now - sync_or_active > SYNC_ONLINE_TIMEOUT: state = state.copy_and_replace( state=PresenceState.OFFLINE, status_msg=None, ) changed = True else: # We expect to be poked occaisonally by the other side. # This is to protect against forgetful/buggy servers, so that # no one gets stuck online forever. if now - state.last_federation_update_ts > FEDERATION_TIMEOUT: # The other side seems to have disappeared. state = state.copy_and_replace( state=PresenceState.OFFLINE, status_msg=None, ) changed = True return state if changed else None def handle_update(prev_state, new_state, is_mine, wheel_timer, now): """Given a presence update: 1. Add any appropriate timers. 2. Check if we should notify anyone. Args: prev_state (UserPresenceState) new_state (UserPresenceState) is_mine (bool): Whether the user is ours wheel_timer (WheelTimer) now (int): Time now in ms Returns: 3-tuple: `(new_state, persist_and_notify, federation_ping)` where: - new_state: is the state to actually persist - persist_and_notify (bool): whether to persist and notify people - federation_ping (bool): whether we should send a ping over federation """ user_id = new_state.user_id persist_and_notify = False federation_ping = False # If the users are ours then we want to set up a bunch of timers # to time things out. if is_mine: if new_state.state == PresenceState.ONLINE: # Idle timer wheel_timer.insert( now=now, obj=user_id, then=new_state.last_active_ts + IDLE_TIMER ) active = now - new_state.last_active_ts < LAST_ACTIVE_GRANULARITY new_state = new_state.copy_and_replace( currently_active=active, ) if active: wheel_timer.insert( now=now, obj=user_id, then=new_state.last_active_ts + LAST_ACTIVE_GRANULARITY ) if new_state.state != PresenceState.OFFLINE: # User has stopped syncing wheel_timer.insert( now=now, obj=user_id, then=new_state.last_user_sync_ts + SYNC_ONLINE_TIMEOUT ) last_federate = new_state.last_federation_update_ts if now - last_federate > FEDERATION_PING_INTERVAL: # Been a while since we've poked remote servers new_state = new_state.copy_and_replace( last_federation_update_ts=now, ) federation_ping = True else: wheel_timer.insert( now=now, obj=user_id, then=new_state.last_federation_update_ts + FEDERATION_TIMEOUT ) # Check whether the change was something worth notifying about if should_notify(prev_state, new_state): new_state = new_state.copy_and_replace( last_federation_update_ts=now, ) persist_and_notify = True return new_state, persist_and_notify, federation_ping @defer.inlineCallbacks def get_interested_parties(store, states): """Given a list of states return which entities (rooms, users) are interested in the given states. Args: states (list(UserPresenceState)) Returns: 2-tuple: `(room_ids_to_states, users_to_states)`, with each item being a dict of `entity_name` -> `[UserPresenceState]` """ room_ids_to_states = {} users_to_states = {} for state in states: room_ids = yield store.get_rooms_for_user(state.user_id) for room_id in room_ids: room_ids_to_states.setdefault(room_id, []).append(state) plist = yield store.get_presence_list_observers_accepted(state.user_id) for u in plist: users_to_states.setdefault(u, []).append(state) # Always notify self users_to_states.setdefault(state.user_id, []).append(state) defer.returnValue((room_ids_to_states, users_to_states)) @defer.inlineCallbacks def get_interested_remotes(store, states, state_handler): """Given a list of presence states figure out which remote servers should be sent which. 
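For each state this means the remote hosts that share a room with the user, plus the hosts of any remote users that have the user in their presence list.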
All the presence states should be for local users only. Args: store (DataStore) states (list(UserPresenceState)) Returns: Deferred list of ([destinations], [UserPresenceState]), where for each row the list of UserPresenceState should be sent to each destination """ hosts_and_states = [] # First we look up the rooms each user is in (as well as any explicit # subscriptions), then for each distinct room we look up the remote # hosts in those rooms. room_ids_to_states, users_to_states = yield get_interested_parties(store, states) for room_id, states in room_ids_to_states.iteritems(): hosts = yield state_handler.get_current_hosts_in_room(room_id) hosts_and_states.append((hosts, states)) for user_id, states in users_to_states.iteritems(): host = get_domain_from_id(user_id) hosts_and_states.append(([host], states)) defer.returnValue(hosts_and_states) synapse-0.24.0/synapse/handlers/profile.py000066400000000000000000000236351317335640100205670ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import logging from twisted.internet import defer import synapse.types from synapse.api.errors import SynapseError, AuthError, CodeMessageException from synapse.types import UserID, get_domain_from_id from ._base import BaseHandler logger = logging.getLogger(__name__) class ProfileHandler(BaseHandler): PROFILE_UPDATE_MS = 60 * 1000 PROFILE_UPDATE_EVERY_MS = 24 * 60 * 60 * 1000 def __init__(self, hs): super(ProfileHandler, self).__init__(hs) self.federation = hs.get_replication_layer() self.federation.register_query_handler( "profile", self.on_profile_query ) self.clock.looping_call(self._update_remote_profile_cache, self.PROFILE_UPDATE_MS) @defer.inlineCallbacks def get_profile(self, user_id): target_user = UserID.from_string(user_id) if self.hs.is_mine(target_user): displayname = yield self.store.get_profile_displayname( target_user.localpart ) avatar_url = yield self.store.get_profile_avatar_url( target_user.localpart ) defer.returnValue({ "displayname": displayname, "avatar_url": avatar_url, }) else: try: result = yield self.federation.make_query( destination=target_user.domain, query_type="profile", args={ "user_id": user_id, }, ignore_backoff=True, ) defer.returnValue(result) except CodeMessageException as e: if e.code != 404: logger.exception("Failed to get displayname") raise @defer.inlineCallbacks def get_profile_from_cache(self, user_id): """Get the profile information from our local cache. If the user is ours then the profile information will always be corect. Otherwise, it may be out of date/missing. 
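Returns: A dict with "displayname" and "avatar_url" keys, which may be empty if we have nothing cached for a remote user.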
""" target_user = UserID.from_string(user_id) if self.hs.is_mine(target_user): displayname = yield self.store.get_profile_displayname( target_user.localpart ) avatar_url = yield self.store.get_profile_avatar_url( target_user.localpart ) defer.returnValue({ "displayname": displayname, "avatar_url": avatar_url, }) else: profile = yield self.store.get_from_remote_profile_cache(user_id) defer.returnValue(profile or {}) @defer.inlineCallbacks def get_displayname(self, target_user): if self.hs.is_mine(target_user): displayname = yield self.store.get_profile_displayname( target_user.localpart ) defer.returnValue(displayname) else: try: result = yield self.federation.make_query( destination=target_user.domain, query_type="profile", args={ "user_id": target_user.to_string(), "field": "displayname", }, ignore_backoff=True, ) except CodeMessageException as e: if e.code != 404: logger.exception("Failed to get displayname") raise except: logger.exception("Failed to get displayname") else: defer.returnValue(result["displayname"]) @defer.inlineCallbacks def set_displayname(self, target_user, requester, new_displayname, by_admin=False): """target_user is the user whose displayname is to be changed; auth_user is the user attempting to make this change.""" if not self.hs.is_mine(target_user): raise SynapseError(400, "User is not hosted on this Home Server") if not by_admin and target_user != requester.user: raise AuthError(400, "Cannot set another user's displayname") if new_displayname == '': new_displayname = None yield self.store.set_profile_displayname( target_user.localpart, new_displayname ) yield self._update_join_states(requester) @defer.inlineCallbacks def get_avatar_url(self, target_user): if self.hs.is_mine(target_user): avatar_url = yield self.store.get_profile_avatar_url( target_user.localpart ) defer.returnValue(avatar_url) else: try: result = yield self.federation.make_query( destination=target_user.domain, query_type="profile", args={ "user_id": target_user.to_string(), "field": "avatar_url", }, ignore_backoff=True, ) except CodeMessageException as e: if e.code != 404: logger.exception("Failed to get avatar_url") raise except: logger.exception("Failed to get avatar_url") defer.returnValue(result["avatar_url"]) @defer.inlineCallbacks def set_avatar_url(self, target_user, requester, new_avatar_url, by_admin=False): """target_user is the user whose avatar_url is to be changed; auth_user is the user attempting to make this change.""" if not self.hs.is_mine(target_user): raise SynapseError(400, "User is not hosted on this Home Server") if not by_admin and target_user != requester.user: raise AuthError(400, "Cannot set another user's avatar_url") yield self.store.set_profile_avatar_url( target_user.localpart, new_avatar_url ) yield self._update_join_states(requester) @defer.inlineCallbacks def on_profile_query(self, args): user = UserID.from_string(args["user_id"]) if not self.hs.is_mine(user): raise SynapseError(400, "User is not hosted on this Home Server") just_field = args.get("field", None) response = {} if just_field is None or just_field == "displayname": response["displayname"] = yield self.store.get_profile_displayname( user.localpart ) if just_field is None or just_field == "avatar_url": response["avatar_url"] = yield self.store.get_profile_avatar_url( user.localpart ) defer.returnValue(response) @defer.inlineCallbacks def _update_join_states(self, requester): user = requester.user if not self.hs.is_mine(user): return yield self.ratelimit(requester) room_ids = yield 
self.store.get_rooms_for_user( user.to_string(), ) for room_id in room_ids: handler = self.hs.get_handlers().room_member_handler try: # Assume the user isn't a guest because we don't let guests set # profile or avatar data. # XXX why are we recreating `requester` here for each room? # what was wrong with the `requester` we were passed? requester = synapse.types.create_requester(user) yield handler.update_membership( requester, user, room_id, "join", # We treat a profile update like a join. ratelimit=False, # Try to hide that these events aren't atomic. ) except Exception as e: logger.warn( "Failed to update join event for room %s - %s", room_id, str(e.message) ) def _update_remote_profile_cache(self): """Called periodically to check profiles of remote users we haven't checked in a while. """ entries = yield self.store.get_remote_profile_cache_entries_that_expire( last_checked=self.clock.time_msec() - self.PROFILE_UPDATE_EVERY_MS ) for user_id, displayname, avatar_url in entries: is_subscribed = yield self.store.is_subscribed_remote_profile_for_user( user_id, ) if not is_subscribed: yield self.store.maybe_delete_remote_profile_cache(user_id) continue try: profile = yield self.federation.make_query( destination=get_domain_from_id(user_id), query_type="profile", args={ "user_id": user_id, }, ignore_backoff=True, ) except: logger.exception("Failed to get avatar_url") yield self.store.update_remote_profile_cache( user_id, displayname, avatar_url ) continue new_name = profile.get("displayname") new_avatar = profile.get("avatar_url") # We always hit update to update the last_check timestamp yield self.store.update_remote_profile_cache( user_id, new_name, new_avatar ) synapse-0.24.0/synapse/handlers/read_marker.py000066400000000000000000000045261317335640100214010ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2017 Vector Creations Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from ._base import BaseHandler from twisted.internet import defer from synapse.util.async import Linearizer import logging logger = logging.getLogger(__name__) class ReadMarkerHandler(BaseHandler): def __init__(self, hs): super(ReadMarkerHandler, self).__init__(hs) self.server_name = hs.config.server_name self.store = hs.get_datastore() self.read_marker_linearizer = Linearizer(name="read_marker") self.notifier = hs.get_notifier() @defer.inlineCallbacks def received_client_read_marker(self, room_id, user_id, event_id): """Updates the read marker for a given user in a given room if the event ID given is ahead in the stream relative to the current read marker. This uses a notifier to indicate that account data should be sent down /sync if the read marker has changed. 
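The marker is stored as "m.fully_read" account data for the room, with content of the form {"event_id": event_id}.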
""" with (yield self.read_marker_linearizer.queue((room_id, user_id))): account_data = yield self.store.get_account_data_for_room(user_id, room_id) existing_read_marker = account_data.get("m.fully_read", None) should_update = True if existing_read_marker: # Only update if the new marker is ahead in the stream should_update = yield self.store.is_event_after( event_id, existing_read_marker['event_id'] ) if should_update: content = { "event_id": event_id } max_id = yield self.store.add_account_data_to_room( user_id, room_id, "m.fully_read", content ) self.notifier.on_new_event("account_data_key", max_id, users=[user_id]) synapse-0.24.0/synapse/handlers/receipts.py000066400000000000000000000163111317335640100207360ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from synapse.util import logcontext from ._base import BaseHandler from twisted.internet import defer from synapse.util.logcontext import PreserveLoggingContext from synapse.types import get_domain_from_id import logging logger = logging.getLogger(__name__) class ReceiptsHandler(BaseHandler): def __init__(self, hs): super(ReceiptsHandler, self).__init__(hs) self.server_name = hs.config.server_name self.store = hs.get_datastore() self.hs = hs self.federation = hs.get_federation_sender() hs.get_replication_layer().register_edu_handler( "m.receipt", self._received_remote_receipt ) self.clock = self.hs.get_clock() self.state = hs.get_state_handler() @defer.inlineCallbacks def received_client_receipt(self, room_id, receipt_type, user_id, event_id): """Called when a client tells us a local user has read up to the given event_id in the room. """ receipt = { "room_id": room_id, "receipt_type": receipt_type, "user_id": user_id, "event_ids": [event_id], "data": { "ts": int(self.clock.time_msec()), } } is_new = yield self._handle_new_receipts([receipt]) if is_new: # fire off a process in the background to send the receipt to # remote servers self._push_remotes([receipt]) @defer.inlineCallbacks def _received_remote_receipt(self, origin, content): """Called when we receive an EDU of type m.receipt from a remote HS. """ receipts = [ { "room_id": room_id, "receipt_type": receipt_type, "user_id": user_id, "event_ids": user_values["event_ids"], "data": user_values.get("data", {}), } for room_id, room_values in content.items() for receipt_type, users in room_values.items() for user_id, user_values in users.items() ] yield self._handle_new_receipts(receipts) @defer.inlineCallbacks def _handle_new_receipts(self, receipts): """Takes a list of receipts, stores them and informs the notifier. 
""" min_batch_id = None max_batch_id = None for receipt in receipts: room_id = receipt["room_id"] receipt_type = receipt["receipt_type"] user_id = receipt["user_id"] event_ids = receipt["event_ids"] data = receipt["data"] res = yield self.store.insert_receipt( room_id, receipt_type, user_id, event_ids, data ) if not res: # res will be None if this read receipt is 'old' continue stream_id, max_persisted_id = res if min_batch_id is None or stream_id < min_batch_id: min_batch_id = stream_id if max_batch_id is None or max_persisted_id > max_batch_id: max_batch_id = max_persisted_id if min_batch_id is None: # no new receipts defer.returnValue(False) affected_room_ids = list(set([r["room_id"] for r in receipts])) with PreserveLoggingContext(): self.notifier.on_new_event( "receipt_key", max_batch_id, rooms=affected_room_ids ) # Note that the min here shouldn't be relied upon to be accurate. self.hs.get_pusherpool().on_new_receipts( min_batch_id, max_batch_id, affected_room_ids ) defer.returnValue(True) @logcontext.preserve_fn # caller should not yield on this @defer.inlineCallbacks def _push_remotes(self, receipts): """Given a list of receipts, works out which remote servers should be poked and pokes them. """ # TODO: Some of this stuff should be coallesced. for receipt in receipts: room_id = receipt["room_id"] receipt_type = receipt["receipt_type"] user_id = receipt["user_id"] event_ids = receipt["event_ids"] data = receipt["data"] users = yield self.state.get_current_user_in_room(room_id) remotedomains = set(get_domain_from_id(u) for u in users) remotedomains = remotedomains.copy() remotedomains.discard(self.server_name) logger.debug("Sending receipt to: %r", remotedomains) for domain in remotedomains: self.federation.send_edu( destination=domain, edu_type="m.receipt", content={ room_id: { receipt_type: { user_id: { "event_ids": event_ids, "data": data, } } }, }, key=(room_id, receipt_type, user_id), ) @defer.inlineCallbacks def get_receipts_for_room(self, room_id, to_key): """Gets all receipts for a room, upto the given key. """ result = yield self.store.get_linearized_receipts_for_room( room_id, to_key=to_key, ) if not result: defer.returnValue([]) defer.returnValue(result) class ReceiptEventSource(object): def __init__(self, hs): self.store = hs.get_datastore() @defer.inlineCallbacks def get_new_events(self, from_key, room_ids, **kwargs): from_key = int(from_key) to_key = yield self.get_current_key() if from_key == to_key: defer.returnValue(([], to_key)) events = yield self.store.get_linearized_receipts_for_rooms( room_ids, from_key=from_key, to_key=to_key, ) defer.returnValue((events, to_key)) def get_current_key(self, direction='f'): return self.store.get_max_receipt_stream_id() @defer.inlineCallbacks def get_pagination_rows(self, user, config, key): to_key = int(config.from_key) if config.to_key: from_key = int(config.to_key) else: from_key = None room_ids = yield self.store.get_rooms_for_user(user.to_string()) events = yield self.store.get_linearized_receipts_for_rooms( room_ids, from_key=from_key, to_key=to_key, ) defer.returnValue((events, to_key)) synapse-0.24.0/synapse/handlers/register.py000066400000000000000000000372641317335640100207560ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014 - 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Contains functions for registering clients.""" import logging import urllib from twisted.internet import defer from synapse.api.errors import ( AuthError, Codes, SynapseError, RegistrationError, InvalidCaptchaError ) from synapse.http.client import CaptchaServerHttpClient from synapse.types import UserID from synapse.util.async import run_on_reactor from ._base import BaseHandler logger = logging.getLogger(__name__) class RegistrationHandler(BaseHandler): def __init__(self, hs): super(RegistrationHandler, self).__init__(hs) self.auth = hs.get_auth() self.profile_handler = hs.get_profile_handler() self.captcha_client = CaptchaServerHttpClient(hs) self._next_generated_user_id = None self.macaroon_gen = hs.get_macaroon_generator() @defer.inlineCallbacks def check_username(self, localpart, guest_access_token=None, assigned_user_id=None): yield run_on_reactor() if urllib.quote(localpart.encode('utf-8')) != localpart: raise SynapseError( 400, "User ID can only contain characters a-z, 0-9, or '_-./'", Codes.INVALID_USERNAME ) if not localpart: raise SynapseError( 400, "User ID cannot be empty", Codes.INVALID_USERNAME ) if localpart[0] == '_': raise SynapseError( 400, "User ID may not begin with _", Codes.INVALID_USERNAME ) user = UserID(localpart, self.hs.hostname) user_id = user.to_string() if assigned_user_id: if user_id == assigned_user_id: return else: raise SynapseError( 400, "A different user ID has already been registered for this session", ) yield self.check_user_id_not_appservice_exclusive(user_id) users = yield self.store.get_users_by_id_case_insensitive(user_id) if users: if not guest_access_token: raise SynapseError( 400, "User ID already taken.", errcode=Codes.USER_IN_USE, ) user_data = yield self.auth.get_user_by_access_token(guest_access_token) if not user_data["is_guest"] or user_data["user"].localpart != localpart: raise AuthError( 403, "Cannot register taken user ID without valid guest " "credentials for that user.", errcode=Codes.FORBIDDEN, ) @defer.inlineCallbacks def register( self, localpart=None, password=None, generate_token=True, guest_access_token=None, make_guest=False, admin=False, ): """Registers a new client on the server. Args: localpart : The local part of the user ID to register. If None, one will be generated. password (str) : The password to assign to this user so they can login again. This can be None which means they cannot login again via a password (e.g. the user is an application service user). generate_token (bool): Whether a new access token should be generated. Having this be True should be considered deprecated, since it offers no means of associating a device_id with the access_token. Instead you should call auth_handler.issue_access_token after registration. Returns: A tuple of (user_id, access_token). Raises: RegistrationError if there was a problem registering. 
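SynapseError may also be raised if the requested localpart is invalid or already taken, and AuthError if a guest access token is supplied but does not correspond to a guest user with that localpart (see check_username).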
""" yield run_on_reactor() password_hash = None if password: password_hash = self.auth_handler().hash(password) if localpart: yield self.check_username(localpart, guest_access_token=guest_access_token) was_guest = guest_access_token is not None if not was_guest: try: int(localpart) raise RegistrationError( 400, "Numeric user IDs are reserved for guest users." ) except ValueError: pass user = UserID(localpart, self.hs.hostname) user_id = user.to_string() token = None if generate_token: token = self.macaroon_gen.generate_access_token(user_id) yield self.store.register( user_id=user_id, token=token, password_hash=password_hash, was_guest=was_guest, make_guest=make_guest, create_profile_with_localpart=( # If the user was a guest then they already have a profile None if was_guest else user.localpart ), admin=admin, ) else: # autogen a sequential user ID attempts = 0 token = None user = None while not user: localpart = yield self._generate_user_id(attempts > 0) user = UserID(localpart, self.hs.hostname) user_id = user.to_string() yield self.check_user_id_not_appservice_exclusive(user_id) if generate_token: token = self.macaroon_gen.generate_access_token(user_id) try: yield self.store.register( user_id=user_id, token=token, password_hash=password_hash, make_guest=make_guest, create_profile_with_localpart=user.localpart, ) except SynapseError: # if user id is taken, just generate another user = None user_id = None token = None attempts += 1 # We used to generate default identicons here, but nowadays # we want clients to generate their own as part of their branding # rather than there being consistent matrix-wide ones, so we don't. defer.returnValue((user_id, token)) @defer.inlineCallbacks def appservice_register(self, user_localpart, as_token): user = UserID(user_localpart, self.hs.hostname) user_id = user.to_string() service = self.store.get_app_service_by_token(as_token) if not service: raise AuthError(403, "Invalid application service token.") if not service.is_interested_in_user(user_id): raise SynapseError( 400, "Invalid user localpart for this application service.", errcode=Codes.EXCLUSIVE ) service_id = service.id if service.is_exclusive_user(user_id) else None yield self.check_user_id_not_appservice_exclusive( user_id, allowed_appservice=service ) yield self.store.register( user_id=user_id, password_hash="", appservice_id=service_id, create_profile_with_localpart=user.localpart, ) defer.returnValue(user_id) @defer.inlineCallbacks def check_recaptcha(self, ip, private_key, challenge, response): """ Checks a recaptcha is correct. Used only by c/s api v1 """ captcha_response = yield self._validate_captcha( ip, private_key, challenge, response ) if not captcha_response["valid"]: logger.info("Invalid captcha entered from %s. Error: %s", ip, captcha_response["error_url"]) raise InvalidCaptchaError( error_url=captcha_response["error_url"] ) else: logger.info("Valid captcha entered from %s", ip) @defer.inlineCallbacks def register_saml2(self, localpart): """ Registers email_id as SAML2 Based Auth. """ if urllib.quote(localpart) != localpart: raise SynapseError( 400, "User ID must only contain characters which do not" " require URL encoding." 
) user = UserID(localpart, self.hs.hostname) user_id = user.to_string() yield self.check_user_id_not_appservice_exclusive(user_id) token = self.macaroon_gen.generate_access_token(user_id) try: yield self.store.register( user_id=user_id, token=token, password_hash=None, create_profile_with_localpart=user.localpart, ) except Exception as e: yield self.store.add_access_token_to_user(user_id, token) # Ignore Registration errors logger.exception(e) defer.returnValue((user_id, token)) @defer.inlineCallbacks def register_email(self, threepidCreds): """ Registers emails with an identity server. Used only by c/s api v1 """ for c in threepidCreds: logger.info("validating theeepidcred sid %s on id server %s", c['sid'], c['idServer']) try: identity_handler = self.hs.get_handlers().identity_handler threepid = yield identity_handler.threepid_from_creds(c) except: logger.exception("Couldn't validate 3pid") raise RegistrationError(400, "Couldn't validate 3pid") if not threepid: raise RegistrationError(400, "Couldn't validate 3pid") logger.info("got threepid with medium '%s' and address '%s'", threepid['medium'], threepid['address']) @defer.inlineCallbacks def bind_emails(self, user_id, threepidCreds): """Links emails with a user ID and informs an identity server. Used only by c/s api v1 """ # Now we have a matrix ID, bind it to the threepids we were given for c in threepidCreds: identity_handler = self.hs.get_handlers().identity_handler # XXX: This should be a deferred list, shouldn't it? yield identity_handler.bind_threepid(c, user_id) def check_user_id_not_appservice_exclusive(self, user_id, allowed_appservice=None): # valid user IDs must not clash with any user ID namespaces claimed by # application services. services = self.store.get_app_services() interested_services = [ s for s in services if s.is_interested_in_user(user_id) and s != allowed_appservice ] for service in interested_services: if service.is_exclusive_user(user_id): raise SynapseError( 400, "This user ID is reserved by an application service.", errcode=Codes.EXCLUSIVE ) @defer.inlineCallbacks def _generate_user_id(self, reseed=False): if reseed or self._next_generated_user_id is None: self._next_generated_user_id = ( yield self.store.find_next_generated_user_id_localpart() ) id = self._next_generated_user_id self._next_generated_user_id += 1 defer.returnValue(str(id)) @defer.inlineCallbacks def _validate_captcha(self, ip_addr, private_key, challenge, response): """Validates the captcha provided. Used only by c/s api v1 Returns: dict: Containing 'valid'(bool) and 'error_url'(str) if invalid. """ response = yield self._submit_captcha(ip_addr, private_key, challenge, response) # parse Google's response. Lovely format.. lines = response.split('\n') json = { "valid": lines[0] == 'true', "error_url": "http://www.google.com/recaptcha/api/challenge?" + "error=%s" % lines[1] } defer.returnValue(json) @defer.inlineCallbacks def _submit_captcha(self, ip_addr, private_key, challenge, response): """ Used only by c/s api v1 """ data = yield self.captcha_client.post_urlencoded_get_raw( "http://www.google.com:80/recaptcha/api/verify", args={ 'privatekey': private_key, 'remoteip': ip_addr, 'challenge': challenge, 'response': response } ) defer.returnValue(data) @defer.inlineCallbacks def get_or_create_user(self, requester, localpart, displayname, password_hash=None): """Creates a new user if the user does not exist, else revokes all previous access tokens and generates a new one. Args: localpart : The local part of the user ID to register. 
If None, one will be randomly generated. Returns: A tuple of (user_id, access_token). Raises: RegistrationError if there was a problem registering. """ yield run_on_reactor() if localpart is None: raise SynapseError(400, "Request must include user id") need_register = True try: yield self.check_username(localpart) except SynapseError as e: if e.errcode == Codes.USER_IN_USE: need_register = False else: raise user = UserID(localpart, self.hs.hostname) user_id = user.to_string() token = self.macaroon_gen.generate_access_token(user_id) if need_register: yield self.store.register( user_id=user_id, token=token, password_hash=password_hash, create_profile_with_localpart=user.localpart, ) else: yield self.store.user_delete_access_tokens(user_id=user_id) yield self.store.add_access_token_to_user(user_id=user_id, token=token) if displayname is not None: logger.info("setting user display name: %s -> %s", user_id, displayname) yield self.profile_handler.set_displayname( user, requester, displayname, by_admin=True, ) defer.returnValue((user_id, token)) def auth_handler(self): return self.hs.get_auth_handler() @defer.inlineCallbacks def guest_access_token_for(self, medium, address, inviter_user_id): access_token = yield self.store.get_3pid_guest_access_token(medium, address) if access_token: defer.returnValue(access_token) _, access_token = yield self.register( generate_token=True, make_guest=True ) access_token = yield self.store.save_or_get_3pid_guest_access_token( medium, address, access_token, inviter_user_id ) defer.returnValue(access_token) synapse-0.24.0/synapse/handlers/room.py000066400000000000000000000406161317335640100201010ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014 - 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
"""Contains functions for performing events on rooms.""" from twisted.internet import defer from ._base import BaseHandler from synapse.types import UserID, RoomAlias, RoomID, RoomStreamToken from synapse.api.constants import ( EventTypes, JoinRules, RoomCreationPreset ) from synapse.api.errors import AuthError, StoreError, SynapseError from synapse.util import stringutils from synapse.visibility import filter_events_for_client from collections import OrderedDict import logging import math import string logger = logging.getLogger(__name__) id_server_scheme = "https://" class RoomCreationHandler(BaseHandler): PRESETS_DICT = { RoomCreationPreset.PRIVATE_CHAT: { "join_rules": JoinRules.INVITE, "history_visibility": "shared", "original_invitees_have_ops": False, "guest_can_join": True, }, RoomCreationPreset.TRUSTED_PRIVATE_CHAT: { "join_rules": JoinRules.INVITE, "history_visibility": "shared", "original_invitees_have_ops": True, "guest_can_join": True, }, RoomCreationPreset.PUBLIC_CHAT: { "join_rules": JoinRules.PUBLIC, "history_visibility": "shared", "original_invitees_have_ops": False, "guest_can_join": False, }, } def __init__(self, hs): super(RoomCreationHandler, self).__init__(hs) self.spam_checker = hs.get_spam_checker() @defer.inlineCallbacks def create_room(self, requester, config, ratelimit=True): """ Creates a new room. Args: requester (Requester): The user who requested the room creation. config (dict) : A dict of configuration options. Returns: The new room ID. Raises: SynapseError if the room ID couldn't be stored, or something went horribly wrong. """ user_id = requester.user.to_string() if not self.spam_checker.user_may_create_room(user_id): raise SynapseError(403, "You are not permitted to create rooms") if ratelimit: yield self.ratelimit(requester) if "room_alias_name" in config: for wchar in string.whitespace: if wchar in config["room_alias_name"]: raise SynapseError(400, "Invalid characters in room alias") room_alias = RoomAlias.create( config["room_alias_name"], self.hs.hostname, ) mapping = yield self.store.get_association_from_room_alias( room_alias ) if mapping: raise SynapseError(400, "Room alias already taken") else: room_alias = None invite_list = config.get("invite", []) for i in invite_list: try: UserID.from_string(i) except: raise SynapseError(400, "Invalid user_id: %s" % (i,)) invite_3pid_list = config.get("invite_3pid", []) visibility = config.get("visibility", None) is_public = visibility == "public" # autogen room IDs and try to create it. We may clash, so just # try a few times till one goes through, giving up eventually. 
attempts = 0 room_id = None while attempts < 5: try: random_string = stringutils.random_string(18) gen_room_id = RoomID.create( random_string, self.hs.hostname, ) yield self.store.store_room( room_id=gen_room_id.to_string(), room_creator_user_id=user_id, is_public=is_public ) room_id = gen_room_id.to_string() break except StoreError: attempts += 1 if not room_id: raise StoreError(500, "Couldn't generate a room ID.") if room_alias: directory_handler = self.hs.get_handlers().directory_handler yield directory_handler.create_association( user_id=user_id, room_id=room_id, room_alias=room_alias, servers=[self.hs.hostname], ) preset_config = config.get( "preset", RoomCreationPreset.PRIVATE_CHAT if visibility == "private" else RoomCreationPreset.PUBLIC_CHAT ) raw_initial_state = config.get("initial_state", []) initial_state = OrderedDict() for val in raw_initial_state: initial_state[(val["type"], val.get("state_key", ""))] = val["content"] creation_content = config.get("creation_content", {}) msg_handler = self.hs.get_handlers().message_handler room_member_handler = self.hs.get_handlers().room_member_handler yield self._send_events_for_new_room( requester, room_id, msg_handler, room_member_handler, preset_config=preset_config, invite_list=invite_list, initial_state=initial_state, creation_content=creation_content, room_alias=room_alias, power_level_content_override=config.get("power_level_content_override", {}) ) if "name" in config: name = config["name"] yield msg_handler.create_and_send_nonmember_event( requester, { "type": EventTypes.Name, "room_id": room_id, "sender": user_id, "state_key": "", "content": {"name": name}, }, ratelimit=False) if "topic" in config: topic = config["topic"] yield msg_handler.create_and_send_nonmember_event( requester, { "type": EventTypes.Topic, "room_id": room_id, "sender": user_id, "state_key": "", "content": {"topic": topic}, }, ratelimit=False) content = {} is_direct = config.get("is_direct", None) if is_direct: content["is_direct"] = is_direct for invitee in invite_list: yield room_member_handler.update_membership( requester, UserID.from_string(invitee), room_id, "invite", ratelimit=False, content=content, ) for invite_3pid in invite_3pid_list: id_server = invite_3pid["id_server"] address = invite_3pid["address"] medium = invite_3pid["medium"] yield self.hs.get_handlers().room_member_handler.do_3pid_invite( room_id, requester.user, medium, address, id_server, requester, txn_id=None, ) result = {"room_id": room_id} if room_alias: result["room_alias"] = room_alias.to_string() yield directory_handler.send_room_alias_update_event( requester, user_id, room_id ) defer.returnValue(result) @defer.inlineCallbacks def _send_events_for_new_room( self, creator, # A Requester object. 
room_id, msg_handler, room_member_handler, preset_config, invite_list, initial_state, creation_content, room_alias, power_level_content_override, ): def create(etype, content, **kwargs): e = { "type": etype, "content": content, } e.update(event_keys) e.update(kwargs) return e @defer.inlineCallbacks def send(etype, content, **kwargs): event = create(etype, content, **kwargs) yield msg_handler.create_and_send_nonmember_event( creator, event, ratelimit=False ) config = RoomCreationHandler.PRESETS_DICT[preset_config] creator_id = creator.user.to_string() event_keys = { "room_id": room_id, "sender": creator_id, "state_key": "", } creation_content.update({"creator": creator_id}) yield send( etype=EventTypes.Create, content=creation_content, ) yield room_member_handler.update_membership( creator, creator.user, room_id, "join", ratelimit=False, ) # We treat the power levels override specially as this needs to be one # of the first events that get sent into a room. pl_content = initial_state.pop((EventTypes.PowerLevels, ''), None) if pl_content is not None: yield send( etype=EventTypes.PowerLevels, content=pl_content, ) else: power_level_content = { "users": { creator_id: 100, }, "users_default": 0, "events": { EventTypes.Name: 50, EventTypes.PowerLevels: 100, EventTypes.RoomHistoryVisibility: 100, EventTypes.CanonicalAlias: 50, EventTypes.RoomAvatar: 50, }, "events_default": 0, "state_default": 50, "ban": 50, "kick": 50, "redact": 50, "invite": 0, } if config["original_invitees_have_ops"]: for invitee in invite_list: power_level_content["users"][invitee] = 100 power_level_content.update(power_level_content_override) yield send( etype=EventTypes.PowerLevels, content=power_level_content, ) if room_alias and (EventTypes.CanonicalAlias, '') not in initial_state: yield send( etype=EventTypes.CanonicalAlias, content={"alias": room_alias.to_string()}, ) if (EventTypes.JoinRules, '') not in initial_state: yield send( etype=EventTypes.JoinRules, content={"join_rule": config["join_rules"]}, ) if (EventTypes.RoomHistoryVisibility, '') not in initial_state: yield send( etype=EventTypes.RoomHistoryVisibility, content={"history_visibility": config["history_visibility"]} ) if config["guest_can_join"]: if (EventTypes.GuestAccess, '') not in initial_state: yield send( etype=EventTypes.GuestAccess, content={"guest_access": "can_join"} ) for (etype, state_key), content in initial_state.items(): yield send( etype=etype, state_key=state_key, content=content, ) class RoomContextHandler(BaseHandler): @defer.inlineCallbacks def get_event_context(self, user, room_id, event_id, limit): """Retrieves events, pagination tokens and state around a given event in a room. Args: user (UserID) room_id (str) event_id (str) limit (int): The maximum number of events to return in total (excluding state). Returns: dict, or None if the event isn't found """ before_limit = math.floor(limit / 2.) after_limit = limit - before_limit now_token = yield self.hs.get_event_sources().get_current_token() users = yield self.store.get_users_in_room(room_id) is_peeking = user.to_string() not in users def filter_evts(events): return filter_events_for_client( self.store, user.to_string(), events, is_peeking=is_peeking ) event = yield self.store.get_event(event_id, get_prev_content=True, allow_none=True) if not event: defer.returnValue(None) return filtered = yield(filter_evts([event])) if not filtered: raise AuthError( 403, "You don't have permission to access that event." 
) results = yield self.store.get_events_around( room_id, event_id, before_limit, after_limit ) results["events_before"] = yield filter_evts(results["events_before"]) results["events_after"] = yield filter_evts(results["events_after"]) results["event"] = event if results["events_after"]: last_event_id = results["events_after"][-1].event_id else: last_event_id = event_id state = yield self.store.get_state_for_events( [last_event_id], None ) results["state"] = state[last_event_id].values() results["start"] = now_token.copy_and_replace( "room_key", results["start"] ).to_string() results["end"] = now_token.copy_and_replace( "room_key", results["end"] ).to_string() defer.returnValue(results) class RoomEventSource(object): def __init__(self, hs): self.store = hs.get_datastore() @defer.inlineCallbacks def get_new_events( self, user, from_key, limit, room_ids, is_guest, explicit_room_id=None, ): # We just ignore the key for now. to_key = yield self.get_current_key() from_token = RoomStreamToken.parse(from_key) if from_token.topological: logger.warn("Stream has topological part!!!! %r", from_key) from_key = "s%s" % (from_token.stream,) app_service = self.store.get_app_service_by_user_id( user.to_string() ) if app_service: events, end_key = yield self.store.get_appservice_room_stream( service=app_service, from_key=from_key, to_key=to_key, limit=limit, ) else: room_events = yield self.store.get_membership_changes_for_user( user.to_string(), from_key, to_key ) room_to_events = yield self.store.get_room_events_stream_for_rooms( room_ids=room_ids, from_key=from_key, to_key=to_key, limit=limit or 10, order='ASC', ) events = list(room_events) events.extend(e for evs, _ in room_to_events.values() for e in evs) events.sort(key=lambda e: e.internal_metadata.order) if limit: events[:] = events[:limit] if events: end_key = events[-1].internal_metadata.after else: end_key = to_key defer.returnValue((events, end_key)) def get_current_key(self): return self.store.get_room_events_max_id() def get_current_key_for_room(self, room_id): return self.store.get_room_events_max_id(room_id) @defer.inlineCallbacks def get_pagination_rows(self, user, config, key): events, next_key = yield self.store.paginate_room_events( room_id=key, from_key=config.from_key, to_key=config.to_key, direction=config.direction, limit=config.limit, ) defer.returnValue((events, next_key)) synapse-0.24.0/synapse/handlers/room_list.py000066400000000000000000000440741317335640100211360ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014 - 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
from twisted.internet import defer from ._base import BaseHandler from synapse.api.constants import ( EventTypes, JoinRules, ) from synapse.util.async import concurrently_execute from synapse.util.caches.descriptors import cachedInlineCallbacks from synapse.util.caches.response_cache import ResponseCache from synapse.types import ThirdPartyInstanceID from collections import namedtuple from unpaddedbase64 import encode_base64, decode_base64 import logging import msgpack logger = logging.getLogger(__name__) REMOTE_ROOM_LIST_POLL_INTERVAL = 60 * 1000 # This is used to indicate we should only return rooms published to the main list. EMTPY_THIRD_PARTY_ID = ThirdPartyInstanceID(None, None) class RoomListHandler(BaseHandler): def __init__(self, hs): super(RoomListHandler, self).__init__(hs) self.response_cache = ResponseCache(hs) self.remote_response_cache = ResponseCache(hs, timeout_ms=30 * 1000) def get_local_public_room_list(self, limit=None, since_token=None, search_filter=None, network_tuple=EMTPY_THIRD_PARTY_ID,): """Generate a local public room list. There are multiple different lists: the main one plus one per third party network. A client can ask for a specific list or to return all. Args: limit (int) since_token (str) search_filter (dict) network_tuple (ThirdPartyInstanceID): Which public list to use. This can be (None, None) to indicate the main list, or a particular appservice and network id to use an appservice specific one. Setting to None returns all public rooms across all lists. """ logger.info( "Getting public room list: limit=%r, since=%r, search=%r, network=%r", limit, since_token, bool(search_filter), network_tuple, ) if search_filter: # We explicitly don't bother caching searches or requests for # appservice specific lists. return self._get_public_room_list( limit, since_token, search_filter, network_tuple=network_tuple, ) key = (limit, since_token, network_tuple) result = self.response_cache.get(key) if not result: result = self.response_cache.set( key, self._get_public_room_list( limit, since_token, network_tuple=network_tuple ) ) return result @defer.inlineCallbacks def _get_public_room_list(self, limit=None, since_token=None, search_filter=None, network_tuple=EMTPY_THIRD_PARTY_ID,): if since_token and since_token != "END": since_token = RoomListNextBatch.from_token(since_token) else: since_token = None rooms_to_order_value = {} rooms_to_num_joined = {} newly_visible = [] newly_unpublished = [] if since_token: stream_token = since_token.stream_ordering current_public_id = yield self.store.get_current_public_room_stream_id() public_room_stream_id = since_token.public_room_stream_id newly_visible, newly_unpublished = yield self.store.get_public_room_changes( public_room_stream_id, current_public_id, network_tuple=network_tuple, ) else: stream_token = yield self.store.get_room_max_stream_ordering() public_room_stream_id = yield self.store.get_current_public_room_stream_id() room_ids = yield self.store.get_public_room_ids_at_stream_id( public_room_stream_id, network_tuple=network_tuple, ) # We want to return rooms in a particular order: the number of joined # users. We then arbitrarily use the room_id as a tie breaker. @defer.inlineCallbacks def get_order_for_room(room_id): # Most of the rooms won't have changed between the since token and # now (especially if the since token is "now"). So, we can ask what # the current users are in a room (that will hit a cache) and then # check if the room has changed since the since token. (We have to # do it in that order to avoid races). 
# If things have changed then fall back to getting the current state # at the since token. joined_users = yield self.store.get_users_in_room(room_id) if self.store.has_room_changed_since(room_id, stream_token): latest_event_ids = yield self.store.get_forward_extremeties_for_room( room_id, stream_token ) if not latest_event_ids: return joined_users = yield self.state_handler.get_current_user_in_room( room_id, latest_event_ids, ) num_joined_users = len(joined_users) rooms_to_num_joined[room_id] = num_joined_users if num_joined_users == 0: return # We want larger rooms to be first, hence negating num_joined_users rooms_to_order_value[room_id] = (-num_joined_users, room_id) yield concurrently_execute(get_order_for_room, room_ids, 10) sorted_entries = sorted(rooms_to_order_value.items(), key=lambda e: e[1]) sorted_rooms = [room_id for room_id, _ in sorted_entries] # `sorted_rooms` should now be a list of all public room ids that is # stable across pagination. Therefore, we can use indices into this # list as our pagination tokens. # Filter out rooms that we don't want to return rooms_to_scan = [ r for r in sorted_rooms if r not in newly_unpublished and rooms_to_num_joined[room_id] > 0 ] total_room_count = len(rooms_to_scan) if since_token: # Filter out rooms we've already returned previously # `since_token.current_limit` is the index of the last room we # sent down, so we exclude it and everything before/after it. if since_token.direction_is_forward: rooms_to_scan = rooms_to_scan[since_token.current_limit + 1:] else: rooms_to_scan = rooms_to_scan[:since_token.current_limit] rooms_to_scan.reverse() # Actually generate the entries. _append_room_entry_to_chunk will append to # chunk but will stop if len(chunk) > limit chunk = [] if limit and not search_filter: step = limit + 1 for i in xrange(0, len(rooms_to_scan), step): # We iterate here because the vast majority of cases we'll stop # at first iteration, but occaisonally _append_room_entry_to_chunk # won't append to the chunk and so we need to loop again. # We don't want to scan over the entire range either as that # would potentially waste a lot of work. yield concurrently_execute( lambda r: self._append_room_entry_to_chunk( r, rooms_to_num_joined[r], chunk, limit, search_filter ), rooms_to_scan[i:i + step], 10 ) if len(chunk) >= limit + 1: break else: yield concurrently_execute( lambda r: self._append_room_entry_to_chunk( r, rooms_to_num_joined[r], chunk, limit, search_filter ), rooms_to_scan, 5 ) chunk.sort(key=lambda e: (-e["num_joined_members"], e["room_id"])) # Work out the new limit of the batch for pagination, or None if we # know there are no more results that would be returned. 
# i.e., [since_token.current_limit..new_limit] is the batch of rooms # we've returned (or the reverse if we paginated backwards) # We tried to pull out limit + 1 rooms above, so if we have <= limit # then we know there are no more results to return new_limit = None if chunk and (not limit or len(chunk) > limit): if not since_token or since_token.direction_is_forward: if limit: chunk = chunk[:limit] last_room_id = chunk[-1]["room_id"] else: if limit: chunk = chunk[-limit:] last_room_id = chunk[0]["room_id"] new_limit = sorted_rooms.index(last_room_id) results = { "chunk": chunk, "total_room_count_estimate": total_room_count, } if since_token: results["new_rooms"] = bool(newly_visible) if not since_token or since_token.direction_is_forward: if new_limit is not None: results["next_batch"] = RoomListNextBatch( stream_ordering=stream_token, public_room_stream_id=public_room_stream_id, current_limit=new_limit, direction_is_forward=True, ).to_token() if since_token: results["prev_batch"] = since_token.copy_and_replace( direction_is_forward=False, current_limit=since_token.current_limit + 1, ).to_token() else: if new_limit is not None: results["prev_batch"] = RoomListNextBatch( stream_ordering=stream_token, public_room_stream_id=public_room_stream_id, current_limit=new_limit, direction_is_forward=False, ).to_token() if since_token: results["next_batch"] = since_token.copy_and_replace( direction_is_forward=True, current_limit=since_token.current_limit - 1, ).to_token() defer.returnValue(results) @defer.inlineCallbacks def _append_room_entry_to_chunk(self, room_id, num_joined_users, chunk, limit, search_filter): """Generate the entry for a room in the public room list and append it to the `chunk` if it matches the search filter """ if limit and len(chunk) > limit + 1: # We've already got enough, so lets just drop it. return result = yield self.generate_room_entry(room_id, num_joined_users) if result and _matches_room_entry(result, search_filter): chunk.append(result) @cachedInlineCallbacks(num_args=1, cache_context=True) def generate_room_entry(self, room_id, num_joined_users, cache_context, with_alias=True, allow_private=False): """Returns the entry for a room """ result = { "room_id": room_id, "num_joined_members": num_joined_users, } current_state_ids = yield self.store.get_current_state_ids( room_id, on_invalidate=cache_context.invalidate, ) event_map = yield self.store.get_events([ event_id for key, event_id in current_state_ids.iteritems() if key[0] in ( EventTypes.JoinRules, EventTypes.Name, EventTypes.Topic, EventTypes.CanonicalAlias, EventTypes.RoomHistoryVisibility, EventTypes.GuestAccess, "m.room.avatar", ) ]) current_state = { (ev.type, ev.state_key): ev for ev in event_map.values() } # Double check that this is actually a public room. 
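        # (The room id came from the public rooms stream, but its join rules
        # may have changed since it was published, so verify them from
        # current state before returning an entry.)
        #
        # A finished entry ends up looking roughly like the following; the
        # values are illustrative and the optional keys are only set when the
        # corresponding state is present:
        #
        #   {"room_id": "!abcdef:example.com", "num_joined_members": 42,
        #    "aliases": ["#room:example.com"], "name": "Example room",
        #    "topic": "An example", "canonical_alias": "#room:example.com",
        #    "world_readable": False, "guest_can_join": True,
        #    "avatar_url": "mxc://example.com/abcdef"}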
join_rules_event = current_state.get((EventTypes.JoinRules, "")) if join_rules_event: join_rule = join_rules_event.content.get("join_rule", None) if not allow_private and join_rule and join_rule != JoinRules.PUBLIC: defer.returnValue(None) if with_alias: aliases = yield self.store.get_aliases_for_room( room_id, on_invalidate=cache_context.invalidate ) if aliases: result["aliases"] = aliases name_event = yield current_state.get((EventTypes.Name, "")) if name_event: name = name_event.content.get("name", None) if name: result["name"] = name topic_event = current_state.get((EventTypes.Topic, "")) if topic_event: topic = topic_event.content.get("topic", None) if topic: result["topic"] = topic canonical_event = current_state.get((EventTypes.CanonicalAlias, "")) if canonical_event: canonical_alias = canonical_event.content.get("alias", None) if canonical_alias: result["canonical_alias"] = canonical_alias visibility_event = current_state.get((EventTypes.RoomHistoryVisibility, "")) visibility = None if visibility_event: visibility = visibility_event.content.get("history_visibility", None) result["world_readable"] = visibility == "world_readable" guest_event = current_state.get((EventTypes.GuestAccess, "")) guest = None if guest_event: guest = guest_event.content.get("guest_access", None) result["guest_can_join"] = guest == "can_join" avatar_event = current_state.get(("m.room.avatar", "")) if avatar_event: avatar_url = avatar_event.content.get("url", None) if avatar_url: result["avatar_url"] = avatar_url defer.returnValue(result) @defer.inlineCallbacks def get_remote_public_room_list(self, server_name, limit=None, since_token=None, search_filter=None, include_all_networks=False, third_party_instance_id=None,): if search_filter: # We currently don't support searching across federation, so we have # to do it manually without pagination limit = None since_token = None res = yield self._get_remote_list_cached( server_name, limit=limit, since_token=since_token, include_all_networks=include_all_networks, third_party_instance_id=third_party_instance_id, ) if search_filter: res = {"chunk": [ entry for entry in list(res.get("chunk", [])) if _matches_room_entry(entry, search_filter) ]} defer.returnValue(res) def _get_remote_list_cached(self, server_name, limit=None, since_token=None, search_filter=None, include_all_networks=False, third_party_instance_id=None,): repl_layer = self.hs.get_replication_layer() if search_filter: # We can't cache when asking for search return repl_layer.get_public_rooms( server_name, limit=limit, since_token=since_token, search_filter=search_filter, include_all_networks=include_all_networks, third_party_instance_id=third_party_instance_id, ) key = ( server_name, limit, since_token, include_all_networks, third_party_instance_id, ) result = self.remote_response_cache.get(key) if not result: result = self.remote_response_cache.set( key, repl_layer.get_public_rooms( server_name, limit=limit, since_token=since_token, search_filter=search_filter, include_all_networks=include_all_networks, third_party_instance_id=third_party_instance_id, ) ) return result class RoomListNextBatch(namedtuple("RoomListNextBatch", ( "stream_ordering", # stream_ordering of the first public room list "public_room_stream_id", # public room stream id for first public room list "current_limit", # The number of previous rooms returned "direction_is_forward", # Bool if this is a next_batch, false if prev_batch ))): KEY_DICT = { "stream_ordering": "s", "public_room_stream_id": "p", "current_limit": "n", 
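        # (The single-character values here are the wire keys: to_token()
        # below applies this mapping to the namedtuple's fields, msgpack-encodes
        # the result and then unpadded-base64 encodes it, so the payload of a
        # batch token is, illustratively, {"s": 5423, "p": 27, "n": 50,
        # "d": True} before encoding.)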
"direction_is_forward": "d", } REVERSE_KEY_DICT = {v: k for k, v in KEY_DICT.items()} @classmethod def from_token(cls, token): return RoomListNextBatch(**{ cls.REVERSE_KEY_DICT[key]: val for key, val in msgpack.loads(decode_base64(token)).items() }) def to_token(self): return encode_base64(msgpack.dumps({ self.KEY_DICT[key]: val for key, val in self._asdict().items() })) def copy_and_replace(self, **kwds): return self._replace( **kwds ) def _matches_room_entry(room_entry, search_filter): if search_filter and search_filter.get("generic_search_term", None): generic_search_term = search_filter["generic_search_term"].upper() if generic_search_term in room_entry.get("name", "").upper(): return True elif generic_search_term in room_entry.get("topic", "").upper(): return True elif generic_search_term in room_entry.get("canonical_alias", "").upper(): return True else: return True return False synapse-0.24.0/synapse/handlers/room_member.py000066400000000000000000000725701317335640100214340ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import logging from signedjson.key import decode_verify_key_bytes from signedjson.sign import verify_signed_json from twisted.internet import defer from unpaddedbase64 import decode_base64 import synapse.types from synapse.api.constants import ( EventTypes, Membership, ) from synapse.api.errors import AuthError, SynapseError, Codes from synapse.types import UserID, RoomID from synapse.util.async import Linearizer from synapse.util.distributor import user_left_room, user_joined_room from ._base import BaseHandler logger = logging.getLogger(__name__) id_server_scheme = "https://" class RoomMemberHandler(BaseHandler): # TODO(paul): This handler currently contains a messy conflation of # low-level API that works on UserID objects and so on, and REST-level # API that takes ID strings and returns pagination chunks. These concerns # ought to be separated out a lot better. 
def __init__(self, hs): super(RoomMemberHandler, self).__init__(hs) self.profile_handler = hs.get_profile_handler() self.member_linearizer = Linearizer(name="member") self.clock = hs.get_clock() self.spam_checker = hs.get_spam_checker() self.distributor = hs.get_distributor() self.distributor.declare("user_joined_room") self.distributor.declare("user_left_room") @defer.inlineCallbacks def _local_membership_update( self, requester, target, room_id, membership, prev_event_ids, txn_id=None, ratelimit=True, content=None, ): if content is None: content = {} msg_handler = self.hs.get_handlers().message_handler content["membership"] = membership if requester.is_guest: content["kind"] = "guest" event, context = yield msg_handler.create_event( requester, { "type": EventTypes.Member, "content": content, "room_id": room_id, "sender": requester.user.to_string(), "state_key": target.to_string(), # For backwards compatibility: "membership": membership, }, token_id=requester.access_token_id, txn_id=txn_id, prev_event_ids=prev_event_ids, ) # Check if this event matches the previous membership event for the user. duplicate = yield msg_handler.deduplicate_state_event(event, context) if duplicate is not None: # Discard the new event since this membership change is a no-op. defer.returnValue(duplicate) yield msg_handler.handle_new_client_event( requester, event, context, extra_users=[target], ratelimit=ratelimit, ) prev_member_event_id = context.prev_state_ids.get( (EventTypes.Member, target.to_string()), None ) if event.membership == Membership.JOIN: # Only fire user_joined_room if the user has acutally joined the # room. Don't bother if the user is just changing their profile # info. newly_joined = True if prev_member_event_id: prev_member_event = yield self.store.get_event(prev_member_event_id) newly_joined = prev_member_event.membership != Membership.JOIN if newly_joined: yield user_joined_room(self.distributor, target, room_id) elif event.membership == Membership.LEAVE: if prev_member_event_id: prev_member_event = yield self.store.get_event(prev_member_event_id) if prev_member_event.membership == Membership.JOIN: user_left_room(self.distributor, target, room_id) defer.returnValue(event) @defer.inlineCallbacks def remote_join(self, remote_room_hosts, room_id, user, content): if len(remote_room_hosts) == 0: raise SynapseError(404, "No known servers") # We don't do an auth check if we are doing an invite # join dance for now, since we're kinda implicitly checking # that we are allowed to join when we decide whether or not we # need to do the invite/join dance. 
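        # do_invite_join asks one of the remote_room_hosts to admit us to the
        # room over federation (the make_join/send_join exchange) and
        # persists the copy of the room state it hands back.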
yield self.hs.get_handlers().federation_handler.do_invite_join( remote_room_hosts, room_id, user.to_string(), content, ) yield user_joined_room(self.distributor, user, room_id) @defer.inlineCallbacks def update_membership( self, requester, target, room_id, action, txn_id=None, remote_room_hosts=None, third_party_signed=None, ratelimit=True, content=None, ): key = (room_id,) with (yield self.member_linearizer.queue(key)): result = yield self._update_membership( requester, target, room_id, action, txn_id=txn_id, remote_room_hosts=remote_room_hosts, third_party_signed=third_party_signed, ratelimit=ratelimit, content=content, ) defer.returnValue(result) @defer.inlineCallbacks def _update_membership( self, requester, target, room_id, action, txn_id=None, remote_room_hosts=None, third_party_signed=None, ratelimit=True, content=None, ): content_specified = bool(content) if content is None: content = {} effective_membership_state = action if action in ["kick", "unban"]: effective_membership_state = "leave" # if this is a join with a 3pid signature, we may need to turn a 3pid # invite into a normal invite before we can handle the join. if third_party_signed is not None: replication = self.hs.get_replication_layer() yield replication.exchange_third_party_invite( third_party_signed["sender"], target.to_string(), room_id, third_party_signed, ) if not remote_room_hosts: remote_room_hosts = [] if effective_membership_state not in ("leave", "ban",): is_blocked = yield self.store.is_room_blocked(room_id) if is_blocked: raise SynapseError(403, "This room has been blocked on this server") if effective_membership_state == "invite": block_invite = False is_requester_admin = yield self.auth.is_server_admin( requester.user, ) if not is_requester_admin: if self.hs.config.block_non_admin_invites: logger.info( "Blocking invite: user is not admin and non-admin " "invites disabled" ) block_invite = True if not self.spam_checker.user_may_invite( requester.user.to_string(), target.to_string(), room_id, ): logger.info("Blocking invite due to spam checker") block_invite = True if block_invite: raise SynapseError( 403, "Invites have been disabled on this server", ) latest_event_ids = yield self.store.get_latest_event_ids_in_room(room_id) current_state_ids = yield self.state_handler.get_current_state_ids( room_id, latest_event_ids=latest_event_ids, ) old_state_id = current_state_ids.get((EventTypes.Member, target.to_string())) if old_state_id: old_state = yield self.store.get_event(old_state_id, allow_none=True) old_membership = old_state.content.get("membership") if old_state else None if action == "unban" and old_membership != "ban": raise SynapseError( 403, "Cannot unban user who was not banned" " (membership=%s)" % old_membership, errcode=Codes.BAD_STATE ) if old_membership == "ban" and action != "unban": raise SynapseError( 403, "Cannot %s user who was banned" % (action,), errcode=Codes.BAD_STATE ) if old_state: same_content = content == old_state.content same_membership = old_membership == effective_membership_state same_sender = requester.user.to_string() == old_state.sender if same_sender and same_membership and same_content: defer.returnValue(old_state) is_host_in_room = yield self._is_host_in_room(current_state_ids) if effective_membership_state == Membership.JOIN: if requester.is_guest: guest_can_join = yield self._can_guest_join(current_state_ids) if not guest_can_join: # This should be an auth check, but guests are a local concept, # so don't really fit into the general auth process. 
raise AuthError(403, "Guest access not allowed") if not is_host_in_room: inviter = yield self.get_inviter(target.to_string(), room_id) if inviter and not self.hs.is_mine(inviter): remote_room_hosts.append(inviter.domain) content["membership"] = Membership.JOIN profile = self.profile_handler if not content_specified: content["displayname"] = yield profile.get_displayname(target) content["avatar_url"] = yield profile.get_avatar_url(target) if requester.is_guest: content["kind"] = "guest" ret = yield self.remote_join( remote_room_hosts, room_id, target, content ) defer.returnValue(ret) elif effective_membership_state == Membership.LEAVE: if not is_host_in_room: # perhaps we've been invited inviter = yield self.get_inviter(target.to_string(), room_id) if not inviter: raise SynapseError(404, "Not a known room") if self.hs.is_mine(inviter): # the inviter was on our server, but has now left. Carry on # with the normal rejection codepath. # # This is a bit of a hack, because the room might still be # active on other servers. pass else: # send the rejection to the inviter's HS. remote_room_hosts = remote_room_hosts + [inviter.domain] fed_handler = self.hs.get_handlers().federation_handler try: ret = yield fed_handler.do_remotely_reject_invite( remote_room_hosts, room_id, target.to_string(), ) defer.returnValue(ret) except Exception as e: # if we were unable to reject the exception, just mark # it as rejected on our end and plough ahead. # # The 'except' clause is very broad, but we need to # capture everything from DNS failures upwards # logger.warn("Failed to reject invite: %s", e) yield self.store.locally_reject_invite( target.to_string(), room_id ) defer.returnValue({}) res = yield self._local_membership_update( requester=requester, target=target, room_id=room_id, membership=effective_membership_state, txn_id=txn_id, ratelimit=ratelimit, prev_event_ids=latest_event_ids, content=content, ) defer.returnValue(res) @defer.inlineCallbacks def send_membership_event( self, requester, event, context, remote_room_hosts=None, ratelimit=True, ): """ Change the membership status of a user in a room. Args: requester (Requester): The local user who requested the membership event. If None, certain checks, like whether this homeserver can act as the sender, will be skipped. event (SynapseEvent): The membership event. context: The context of the event. is_guest (bool): Whether the sender is a guest. room_hosts ([str]): Homeservers which are likely to already be in the room, and could be danced with in order to join this homeserver for the first time. ratelimit (bool): Whether to rate limit this request. Raises: SynapseError if there was a problem changing the membership. 
""" remote_room_hosts = remote_room_hosts or [] target_user = UserID.from_string(event.state_key) room_id = event.room_id if requester is not None: sender = UserID.from_string(event.sender) assert sender == requester.user, ( "Sender (%s) must be same as requester (%s)" % (sender, requester.user) ) assert self.hs.is_mine(sender), "Sender must be our own: %s" % (sender,) else: requester = synapse.types.create_requester(target_user) message_handler = self.hs.get_handlers().message_handler prev_event = yield message_handler.deduplicate_state_event(event, context) if prev_event is not None: return if event.membership == Membership.JOIN: if requester.is_guest: guest_can_join = yield self._can_guest_join(context.prev_state_ids) if not guest_can_join: # This should be an auth check, but guests are a local concept, # so don't really fit into the general auth process. raise AuthError(403, "Guest access not allowed") if event.membership not in (Membership.LEAVE, Membership.BAN): is_blocked = yield self.store.is_room_blocked(room_id) if is_blocked: raise SynapseError(403, "This room has been blocked on this server") yield message_handler.handle_new_client_event( requester, event, context, extra_users=[target_user], ratelimit=ratelimit, ) prev_member_event_id = context.prev_state_ids.get( (EventTypes.Member, event.state_key), None ) if event.membership == Membership.JOIN: # Only fire user_joined_room if the user has acutally joined the # room. Don't bother if the user is just changing their profile # info. newly_joined = True if prev_member_event_id: prev_member_event = yield self.store.get_event(prev_member_event_id) newly_joined = prev_member_event.membership != Membership.JOIN if newly_joined: yield user_joined_room(self.distributor, target_user, room_id) elif event.membership == Membership.LEAVE: if prev_member_event_id: prev_member_event = yield self.store.get_event(prev_member_event_id) if prev_member_event.membership == Membership.JOIN: user_left_room(self.distributor, target_user, room_id) @defer.inlineCallbacks def _can_guest_join(self, current_state_ids): """ Returns whether a guest can join a room based on its current state. """ guest_access_id = current_state_ids.get((EventTypes.GuestAccess, ""), None) if not guest_access_id: defer.returnValue(False) guest_access = yield self.store.get_event(guest_access_id) defer.returnValue( guest_access and guest_access.content and "guest_access" in guest_access.content and guest_access.content["guest_access"] == "can_join" ) @defer.inlineCallbacks def lookup_room_alias(self, room_alias): """ Get the room ID associated with a room alias. Args: room_alias (RoomAlias): The alias to look up. Returns: A tuple of: The room ID as a RoomID object. Hosts likely to be participating in the room ([str]). Raises: SynapseError if room alias could not be found. 
""" directory_handler = self.hs.get_handlers().directory_handler mapping = yield directory_handler.get_association(room_alias) if not mapping: raise SynapseError(404, "No such room alias") room_id = mapping["room_id"] servers = mapping["servers"] defer.returnValue((RoomID.from_string(room_id), servers)) @defer.inlineCallbacks def get_inviter(self, user_id, room_id): invite = yield self.store.get_invite_for_user_in_room( user_id=user_id, room_id=room_id, ) if invite: defer.returnValue(UserID.from_string(invite.sender)) @defer.inlineCallbacks def do_3pid_invite( self, room_id, inviter, medium, address, id_server, requester, txn_id ): if self.hs.config.block_non_admin_invites: is_requester_admin = yield self.auth.is_server_admin( requester.user, ) if not is_requester_admin: raise SynapseError( 403, "Invites have been disabled on this server", Codes.FORBIDDEN, ) invitee = yield self._lookup_3pid( id_server, medium, address ) if invitee: yield self.update_membership( requester, UserID.from_string(invitee), room_id, "invite", txn_id=txn_id, ) else: yield self._make_and_store_3pid_invite( requester, id_server, medium, address, room_id, inviter, txn_id=txn_id ) @defer.inlineCallbacks def _lookup_3pid(self, id_server, medium, address): """Looks up a 3pid in the passed identity server. Args: id_server (str): The server name (including port, if required) of the identity server to use. medium (str): The type of the third party identifier (e.g. "email"). address (str): The third party identifier (e.g. "foo@example.com"). Returns: str: the matrix ID of the 3pid, or None if it is not recognized. """ try: data = yield self.hs.get_simple_http_client().get_json( "%s%s/_matrix/identity/api/v1/lookup" % (id_server_scheme, id_server,), { "medium": medium, "address": address, } ) if "mxid" in data: if "signatures" not in data: raise AuthError(401, "No signatures on 3pid binding") self.verify_any_signature(data, id_server) defer.returnValue(data["mxid"]) except IOError as e: logger.warn("Error from identity server lookup: %s" % (e,)) defer.returnValue(None) @defer.inlineCallbacks def verify_any_signature(self, data, server_hostname): if server_hostname not in data["signatures"]: raise AuthError(401, "No signature from server %s" % (server_hostname,)) for key_name, signature in data["signatures"][server_hostname].items(): key_data = yield self.hs.get_simple_http_client().get_json( "%s%s/_matrix/identity/api/v1/pubkey/%s" % (id_server_scheme, server_hostname, key_name,), ) if "public_key" not in key_data: raise AuthError(401, "No public key named %s from %s" % (key_name, server_hostname,)) verify_signed_json( data, server_hostname, decode_verify_key_bytes(key_name, decode_base64(key_data["public_key"])) ) return @defer.inlineCallbacks def _make_and_store_3pid_invite( self, requester, id_server, medium, address, room_id, user, txn_id ): room_state = yield self.hs.get_state_handler().get_current_state(room_id) inviter_display_name = "" inviter_avatar_url = "" member_event = room_state.get((EventTypes.Member, user.to_string())) if member_event: inviter_display_name = member_event.content.get("displayname", "") inviter_avatar_url = member_event.content.get("avatar_url", "") canonical_room_alias = "" canonical_alias_event = room_state.get((EventTypes.CanonicalAlias, "")) if canonical_alias_event: canonical_room_alias = canonical_alias_event.content.get("alias", "") room_name = "" room_name_event = room_state.get((EventTypes.Name, "")) if room_name_event: room_name = room_name_event.content.get("name", "") 
room_join_rules = "" join_rules_event = room_state.get((EventTypes.JoinRules, "")) if join_rules_event: room_join_rules = join_rules_event.content.get("join_rule", "") room_avatar_url = "" room_avatar_event = room_state.get((EventTypes.RoomAvatar, "")) if room_avatar_event: room_avatar_url = room_avatar_event.content.get("url", "") token, public_keys, fallback_public_key, display_name = ( yield self._ask_id_server_for_third_party_invite( id_server=id_server, medium=medium, address=address, room_id=room_id, inviter_user_id=user.to_string(), room_alias=canonical_room_alias, room_avatar_url=room_avatar_url, room_join_rules=room_join_rules, room_name=room_name, inviter_display_name=inviter_display_name, inviter_avatar_url=inviter_avatar_url ) ) msg_handler = self.hs.get_handlers().message_handler yield msg_handler.create_and_send_nonmember_event( requester, { "type": EventTypes.ThirdPartyInvite, "content": { "display_name": display_name, "public_keys": public_keys, # For backwards compatibility: "key_validity_url": fallback_public_key["key_validity_url"], "public_key": fallback_public_key["public_key"], }, "room_id": room_id, "sender": user.to_string(), "state_key": token, }, txn_id=txn_id, ) @defer.inlineCallbacks def _ask_id_server_for_third_party_invite( self, id_server, medium, address, room_id, inviter_user_id, room_alias, room_avatar_url, room_join_rules, room_name, inviter_display_name, inviter_avatar_url ): """ Asks an identity server for a third party invite. Args: id_server (str): hostname + optional port for the identity server. medium (str): The literal string "email". address (str): The third party address being invited. room_id (str): The ID of the room to which the user is invited. inviter_user_id (str): The user ID of the inviter. room_alias (str): An alias for the room, for cosmetic notifications. room_avatar_url (str): The URL of the room's avatar, for cosmetic notifications. room_join_rules (str): The join rules of the email (e.g. "public"). room_name (str): The m.room.name of the room. inviter_display_name (str): The current display name of the inviter. inviter_avatar_url (str): The URL of the inviter's avatar. Returns: A deferred tuple containing: token (str): The token which must be signed to prove authenticity. public_keys ([{"public_key": str, "key_validity_url": str}]): public_key is a base64-encoded ed25519 public key. fallback_public_key: One element from public_keys. display_name (str): A user-friendly name to represent the invited user. 
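            For example (illustrative values), public_keys might look like::

                [{"public_key": "<unpadded base64 ed25519 key>",
                  "key_validity_url":
                      "https://example.com/_matrix/identity/api/v1/pubkey/isvalid"}]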
""" is_url = "%s%s/_matrix/identity/api/v1/store-invite" % ( id_server_scheme, id_server, ) invite_config = { "medium": medium, "address": address, "room_id": room_id, "room_alias": room_alias, "room_avatar_url": room_avatar_url, "room_join_rules": room_join_rules, "room_name": room_name, "sender": inviter_user_id, "sender_display_name": inviter_display_name, "sender_avatar_url": inviter_avatar_url, } if self.hs.config.invite_3pid_guest: registration_handler = self.hs.get_handlers().registration_handler guest_access_token = yield registration_handler.guest_access_token_for( medium=medium, address=address, inviter_user_id=inviter_user_id, ) guest_user_info = yield self.hs.get_auth().get_user_by_access_token( guest_access_token ) invite_config.update({ "guest_access_token": guest_access_token, "guest_user_id": guest_user_info["user"].to_string(), }) data = yield self.hs.get_simple_http_client().post_urlencoded_get_json( is_url, invite_config ) # TODO: Check for success token = data["token"] public_keys = data.get("public_keys", []) if "public_key" in data: fallback_public_key = { "public_key": data["public_key"], "key_validity_url": "%s%s/_matrix/identity/api/v1/pubkey/isvalid" % ( id_server_scheme, id_server, ), } else: fallback_public_key = public_keys[0] if not public_keys: public_keys.append(fallback_public_key) display_name = data["display_name"] defer.returnValue((token, public_keys, fallback_public_key, display_name)) @defer.inlineCallbacks def forget(self, user, room_id): user_id = user.to_string() member = yield self.state_handler.get_current_state( room_id=room_id, event_type=EventTypes.Member, state_key=user_id ) membership = member.membership if member else None if membership is not None and membership not in [ Membership.LEAVE, Membership.BAN ]: raise SynapseError(400, "User %s in room %s" % ( user_id, room_id )) if membership: yield self.store.forget(user_id, room_id) @defer.inlineCallbacks def _is_host_in_room(self, current_state_ids): # Have we just created the room, and is this about to be the very # first member event? create_event_id = current_state_ids.get(("m.room.create", "")) if len(current_state_ids) == 1 and create_event_id: defer.returnValue(self.hs.is_mine_id(create_event_id)) for etype, state_key in current_state_ids: if etype != EventTypes.Member or not self.hs.is_mine_id(state_key): continue event_id = current_state_ids[(etype, state_key)] event = yield self.store.get_event(event_id, allow_none=True) if not event: continue if event.membership == Membership.JOIN: defer.returnValue(True) defer.returnValue(False) synapse-0.24.0/synapse/handlers/search.py000066400000000000000000000327531317335640100203750ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
from twisted.internet import defer from ._base import BaseHandler from synapse.api.constants import Membership, EventTypes from synapse.api.filtering import Filter from synapse.api.errors import SynapseError from synapse.events.utils import serialize_event from synapse.visibility import filter_events_for_client from unpaddedbase64 import decode_base64, encode_base64 import itertools import logging logger = logging.getLogger(__name__) class SearchHandler(BaseHandler): def __init__(self, hs): super(SearchHandler, self).__init__(hs) @defer.inlineCallbacks def search(self, user, content, batch=None): """Performs a full text search for a user. Args: user (UserID) content (dict): Search parameters batch (str): The next_batch parameter. Used for pagination. Returns: dict to be returned to the client with results of search """ batch_group = None batch_group_key = None batch_token = None if batch: try: b = decode_base64(batch) batch_group, batch_group_key, batch_token = b.split("\n") assert batch_group is not None assert batch_group_key is not None assert batch_token is not None except: raise SynapseError(400, "Invalid batch") try: room_cat = content["search_categories"]["room_events"] # The actual thing to query in FTS search_term = room_cat["search_term"] # Which "keys" to search over in FTS query keys = room_cat.get("keys", [ "content.body", "content.name", "content.topic", ]) # Filter to apply to results filter_dict = room_cat.get("filter", {}) # What to order results by (impacts whether pagination can be doen) order_by = room_cat.get("order_by", "rank") # Return the current state of the rooms? include_state = room_cat.get("include_state", False) # Include context around each event? event_context = room_cat.get( "event_context", None ) # Group results together? May allow clients to paginate within a # group group_by = room_cat.get("groupings", {}).get("group_by", {}) group_keys = [g["key"] for g in group_by] if event_context is not None: before_limit = int(event_context.get( "before_limit", 5 )) after_limit = int(event_context.get( "after_limit", 5 )) # Return the historic display name and avatar for the senders # of the events? 
include_profile = bool(event_context.get("include_profile", False)) except KeyError: raise SynapseError(400, "Invalid search query") if order_by not in ("rank", "recent"): raise SynapseError(400, "Invalid order by: %r" % (order_by,)) if set(group_keys) - {"room_id", "sender"}: raise SynapseError( 400, "Invalid group by keys: %r" % (set(group_keys) - {"room_id", "sender"},) ) search_filter = Filter(filter_dict) # TODO: Search through left rooms too rooms = yield self.store.get_rooms_for_user_where_membership_is( user.to_string(), membership_list=[Membership.JOIN], # membership_list=[Membership.JOIN, Membership.LEAVE, Membership.Ban], ) room_ids = set(r.room_id for r in rooms) room_ids = search_filter.filter_rooms(room_ids) if batch_group == "room_id": room_ids.intersection_update({batch_group_key}) if not room_ids: defer.returnValue({ "search_categories": { "room_events": { "results": [], "count": 0, "highlights": [], } } }) rank_map = {} # event_id -> rank of event allowed_events = [] room_groups = {} # Holds result of grouping by room, if applicable sender_group = {} # Holds result of grouping by sender, if applicable # Holds the next_batch for the entire result set if one of those exists global_next_batch = None highlights = set() count = None if order_by == "rank": search_result = yield self.store.search_msgs( room_ids, search_term, keys ) count = search_result["count"] if search_result["highlights"]: highlights.update(search_result["highlights"]) results = search_result["results"] results_map = {r["event"].event_id: r for r in results} rank_map.update({r["event"].event_id: r["rank"] for r in results}) filtered_events = search_filter.filter([r["event"] for r in results]) events = yield filter_events_for_client( self.store, user.to_string(), filtered_events ) events.sort(key=lambda e: -rank_map[e.event_id]) allowed_events = events[:search_filter.limit()] for e in allowed_events: rm = room_groups.setdefault(e.room_id, { "results": [], "order": rank_map[e.event_id], }) rm["results"].append(e.event_id) s = sender_group.setdefault(e.sender, { "results": [], "order": rank_map[e.event_id], }) s["results"].append(e.event_id) elif order_by == "recent": room_events = [] i = 0 pagination_token = batch_token # We keep looping and we keep filtering until we reach the limit # or we run out of things. # But only go around 5 times since otherwise synapse will be sad. 
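            # Each pass below asks the store for up to (limit * 2) matches
            # from the current pagination point and filters them for
            # visibility; we stop early once the store returns fewer than
            # (limit * 2) rows, since that means there is nothing left to
            # page through.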
while len(room_events) < search_filter.limit() and i < 5: i += 1 search_result = yield self.store.search_rooms( room_ids, search_term, keys, search_filter.limit() * 2, pagination_token=pagination_token, ) if search_result["highlights"]: highlights.update(search_result["highlights"]) count = search_result["count"] results = search_result["results"] results_map = {r["event"].event_id: r for r in results} rank_map.update({r["event"].event_id: r["rank"] for r in results}) filtered_events = search_filter.filter([ r["event"] for r in results ]) events = yield filter_events_for_client( self.store, user.to_string(), filtered_events ) room_events.extend(events) room_events = room_events[:search_filter.limit()] if len(results) < search_filter.limit() * 2: pagination_token = None break else: pagination_token = results[-1]["pagination_token"] for event in room_events: group = room_groups.setdefault(event.room_id, { "results": [], }) group["results"].append(event.event_id) if room_events and len(room_events) >= search_filter.limit(): last_event_id = room_events[-1].event_id pagination_token = results_map[last_event_id]["pagination_token"] # We want to respect the given batch group and group keys so # that if people blindly use the top level `next_batch` token # it returns more from the same group (if applicable) rather # than reverting to searching all results again. if batch_group and batch_group_key: global_next_batch = encode_base64("%s\n%s\n%s" % ( batch_group, batch_group_key, pagination_token )) else: global_next_batch = encode_base64("%s\n%s\n%s" % ( "all", "", pagination_token )) for room_id, group in room_groups.items(): group["next_batch"] = encode_base64("%s\n%s\n%s" % ( "room_id", room_id, pagination_token )) allowed_events.extend(room_events) else: # We should never get here due to the guard earlier. raise NotImplementedError() # If client has asked for "context" for each event (i.e. 
some surrounding # events and state), fetch that if event_context is not None: now_token = yield self.hs.get_event_sources().get_current_token() contexts = {} for event in allowed_events: res = yield self.store.get_events_around( event.room_id, event.event_id, before_limit, after_limit ) res["events_before"] = yield filter_events_for_client( self.store, user.to_string(), res["events_before"] ) res["events_after"] = yield filter_events_for_client( self.store, user.to_string(), res["events_after"] ) res["start"] = now_token.copy_and_replace( "room_key", res["start"] ).to_string() res["end"] = now_token.copy_and_replace( "room_key", res["end"] ).to_string() if include_profile: senders = set( ev.sender for ev in itertools.chain( res["events_before"], [event], res["events_after"] ) ) if res["events_after"]: last_event_id = res["events_after"][-1].event_id else: last_event_id = event.event_id state = yield self.store.get_state_for_event( last_event_id, types=[(EventTypes.Member, sender) for sender in senders] ) res["profile_info"] = { s.state_key: { "displayname": s.content.get("displayname", None), "avatar_url": s.content.get("avatar_url", None), } for s in state.values() if s.type == EventTypes.Member and s.state_key in senders } contexts[event.event_id] = res else: contexts = {} # TODO: Add a limit time_now = self.clock.time_msec() for context in contexts.values(): context["events_before"] = [ serialize_event(e, time_now) for e in context["events_before"] ] context["events_after"] = [ serialize_event(e, time_now) for e in context["events_after"] ] state_results = {} if include_state: rooms = set(e.room_id for e in allowed_events) for room_id in rooms: state = yield self.state_handler.get_current_state(room_id) state_results[room_id] = state.values() state_results.values() # We're now about to serialize the events. We should not make any # blocking calls after this. Otherwise the 'age' will be wrong results = [ { "rank": rank_map[e.event_id], "result": serialize_event(e, time_now), "context": contexts.get(e.event_id, {}), } for e in allowed_events ] rooms_cat_res = { "results": results, "count": count, "highlights": list(highlights), } if state_results: rooms_cat_res["state"] = { room_id: [serialize_event(e, time_now) for e in state] for room_id, state in state_results.items() } if room_groups and "room_id" in group_keys: rooms_cat_res.setdefault("groups", {})["room_id"] = room_groups if sender_group and "sender" in group_keys: rooms_cat_res.setdefault("groups", {})["sender"] = sender_group if global_next_batch: rooms_cat_res["next_batch"] = global_next_batch defer.returnValue({ "search_categories": { "room_events": rooms_cat_res } }) synapse-0.24.0/synapse/handlers/sync.py000066400000000000000000001551711317335640100201040ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015 - 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
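# This module implements the sync handler: it builds /sync responses either
# from scratch (initial and full-state syncs) or incrementally from a since
# token, covering room timelines, state deltas, ephemeral events, account
# data, to-device messages, device lists and group membership.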
from synapse.api.constants import Membership, EventTypes from synapse.util.async import concurrently_execute from synapse.util.logcontext import LoggingContext from synapse.util.metrics import Measure, measure_func from synapse.util.caches.response_cache import ResponseCache from synapse.push.clientformat import format_push_rules_for_user from synapse.visibility import filter_events_for_client from synapse.types import RoomStreamToken from twisted.internet import defer import collections import logging import itertools logger = logging.getLogger(__name__) SyncConfig = collections.namedtuple("SyncConfig", [ "user", "filter_collection", "is_guest", "request_key", "device_id", ]) class TimelineBatch(collections.namedtuple("TimelineBatch", [ "prev_batch", "events", "limited", ])): __slots__ = [] def __nonzero__(self): """Make the result appear empty if there are no updates. This is used to tell if room needs to be part of the sync result. """ return bool(self.events) class JoinedSyncResult(collections.namedtuple("JoinedSyncResult", [ "room_id", # str "timeline", # TimelineBatch "state", # dict[(str, str), FrozenEvent] "ephemeral", "account_data", "unread_notifications", ])): __slots__ = [] def __nonzero__(self): """Make the result appear empty if there are no updates. This is used to tell if room needs to be part of the sync result. """ return bool( self.timeline or self.state or self.ephemeral or self.account_data # nb the notification count does not, er, count: if there's nothing # else in the result, we don't need to send it. ) class ArchivedSyncResult(collections.namedtuple("ArchivedSyncResult", [ "room_id", # str "timeline", # TimelineBatch "state", # dict[(str, str), FrozenEvent] "account_data", ])): __slots__ = [] def __nonzero__(self): """Make the result appear empty if there are no updates. This is used to tell if room needs to be part of the sync result. """ return bool( self.timeline or self.state or self.account_data ) class InvitedSyncResult(collections.namedtuple("InvitedSyncResult", [ "room_id", # str "invite", # FrozenEvent: the invite event ])): __slots__ = [] def __nonzero__(self): """Invited rooms should always be reported to the client""" return True class GroupsSyncResult(collections.namedtuple("GroupsSyncResult", [ "join", "invite", "leave", ])): __slots__ = [] def __nonzero__(self): return bool(self.join or self.invite or self.leave) class DeviceLists(collections.namedtuple("DeviceLists", [ "changed", # list of user_ids whose devices may have changed "left", # list of user_ids whose devices we no longer track ])): __slots__ = [] def __nonzero__(self): return bool(self.changed or self.left) class SyncResult(collections.namedtuple("SyncResult", [ "next_batch", # Token for the next sync "presence", # List of presence events for the user. "account_data", # List of account_data events for the user. "joined", # JoinedSyncResult for each joined room. "invited", # InvitedSyncResult for each invited room. "archived", # ArchivedSyncResult for each archived room. "to_device", # List of direct messages for the device. "device_lists", # List of user_ids whose devices have chanegd "device_one_time_keys_count", # Dict of algorithm to count for one time keys # for this device "groups", ])): __slots__ = [] def __nonzero__(self): """Make the result appear empty if there are no updates. This is used to tell if the notifier needs to wait for more events when polling for events. 
""" return bool( self.presence or self.joined or self.invited or self.archived or self.account_data or self.to_device or self.device_lists or self.groups ) class SyncHandler(object): def __init__(self, hs): self.store = hs.get_datastore() self.notifier = hs.get_notifier() self.presence_handler = hs.get_presence_handler() self.event_sources = hs.get_event_sources() self.clock = hs.get_clock() self.response_cache = ResponseCache(hs) self.state = hs.get_state_handler() def wait_for_sync_for_user(self, sync_config, since_token=None, timeout=0, full_state=False): """Get the sync for a client if we have new data for it now. Otherwise wait for new data to arrive on the server. If the timeout expires, then return an empty sync result. Returns: A Deferred SyncResult. """ result = self.response_cache.get(sync_config.request_key) if not result: result = self.response_cache.set( sync_config.request_key, self._wait_for_sync_for_user( sync_config, since_token, timeout, full_state ) ) return result @defer.inlineCallbacks def _wait_for_sync_for_user(self, sync_config, since_token, timeout, full_state): context = LoggingContext.current_context() if context: if since_token is None: context.tag = "initial_sync" elif full_state: context.tag = "full_state_sync" else: context.tag = "incremental_sync" if timeout == 0 or since_token is None or full_state: # we are going to return immediately, so don't bother calling # notifier.wait_for_events. result = yield self.current_sync_for_user( sync_config, since_token, full_state=full_state, ) defer.returnValue(result) else: def current_sync_callback(before_token, after_token): return self.current_sync_for_user(sync_config, since_token) result = yield self.notifier.wait_for_events( sync_config.user.to_string(), timeout, current_sync_callback, from_token=since_token, ) defer.returnValue(result) def current_sync_for_user(self, sync_config, since_token=None, full_state=False): """Get the sync for client needed to match what the server has now. Returns: A Deferred SyncResult. """ return self.generate_sync_result(sync_config, since_token, full_state) @defer.inlineCallbacks def push_rules_for_user(self, user): user_id = user.to_string() rules = yield self.store.get_push_rules_for_user(user_id) rules = format_push_rules_for_user(user, rules) defer.returnValue(rules) @defer.inlineCallbacks def ephemeral_by_room(self, sync_config, now_token, since_token=None): """Get the ephemeral events for each room the user is in Args: sync_config (SyncConfig): The flags, filters and user for the sync. now_token (StreamToken): Where the server is currently up to. since_token (StreamToken): Where the server was when the client last synced. Returns: A tuple of the now StreamToken, updated to reflect the which typing events are included, and a dict mapping from room_id to a list of typing events for that room. 
""" with Measure(self.clock, "ephemeral_by_room"): typing_key = since_token.typing_key if since_token else "0" room_ids = yield self.store.get_rooms_for_user(sync_config.user.to_string()) typing_source = self.event_sources.sources["typing"] typing, typing_key = yield typing_source.get_new_events( user=sync_config.user, from_key=typing_key, limit=sync_config.filter_collection.ephemeral_limit(), room_ids=room_ids, is_guest=sync_config.is_guest, ) now_token = now_token.copy_and_replace("typing_key", typing_key) ephemeral_by_room = {} for event in typing: # we want to exclude the room_id from the event, but modifying the # result returned by the event source is poor form (it might cache # the object) room_id = event["room_id"] event_copy = {k: v for (k, v) in event.iteritems() if k != "room_id"} ephemeral_by_room.setdefault(room_id, []).append(event_copy) receipt_key = since_token.receipt_key if since_token else "0" receipt_source = self.event_sources.sources["receipt"] receipts, receipt_key = yield receipt_source.get_new_events( user=sync_config.user, from_key=receipt_key, limit=sync_config.filter_collection.ephemeral_limit(), room_ids=room_ids, is_guest=sync_config.is_guest, ) now_token = now_token.copy_and_replace("receipt_key", receipt_key) for event in receipts: room_id = event["room_id"] # exclude room id, as above event_copy = {k: v for (k, v) in event.iteritems() if k != "room_id"} ephemeral_by_room.setdefault(room_id, []).append(event_copy) defer.returnValue((now_token, ephemeral_by_room)) @defer.inlineCallbacks def _load_filtered_recents(self, room_id, sync_config, now_token, since_token=None, recents=None, newly_joined_room=False): """ Returns: a Deferred TimelineBatch """ with Measure(self.clock, "load_filtered_recents"): timeline_limit = sync_config.filter_collection.timeline_limit() block_all_timeline = sync_config.filter_collection.blocks_all_room_timeline() if recents is None or newly_joined_room or timeline_limit < len(recents): limited = True else: limited = False if recents: recents = sync_config.filter_collection.filter_room_timeline(recents) # We check if there are any state events, if there are then we pass # all current state events to the filter_events function. This is to # ensure that we always include current state in the timeline current_state_ids = frozenset() if any(e.is_state() for e in recents): current_state_ids = yield self.state.get_current_state_ids(room_id) current_state_ids = frozenset(current_state_ids.itervalues()) recents = yield filter_events_for_client( self.store, sync_config.user.to_string(), recents, always_include_ids=current_state_ids, ) else: recents = [] if not limited or block_all_timeline: defer.returnValue(TimelineBatch( events=recents, prev_batch=now_token, limited=False )) filtering_factor = 2 load_limit = max(timeline_limit * filtering_factor, 10) max_repeat = 5 # Only try a few times per room, otherwise room_key = now_token.room_key end_key = room_key since_key = None if since_token and not newly_joined_room: since_key = since_token.room_key while limited and len(recents) < timeline_limit and max_repeat: events, end_key = yield self.store.get_room_events_stream_for_room( room_id, limit=load_limit + 1, from_key=since_key, to_key=end_key, ) loaded_recents = sync_config.filter_collection.filter_room_timeline( events ) # We check if there are any state events, if there are then we pass # all current state events to the filter_events function. 
This is to # ensure that we always include current state in the timeline current_state_ids = frozenset() if any(e.is_state() for e in loaded_recents): current_state_ids = yield self.state.get_current_state_ids(room_id) current_state_ids = frozenset(current_state_ids.itervalues()) loaded_recents = yield filter_events_for_client( self.store, sync_config.user.to_string(), loaded_recents, always_include_ids=current_state_ids, ) loaded_recents.extend(recents) recents = loaded_recents if len(events) <= load_limit: limited = False break max_repeat -= 1 if len(recents) > timeline_limit: limited = True recents = recents[-timeline_limit:] room_key = recents[0].internal_metadata.before prev_batch_token = now_token.copy_and_replace( "room_key", room_key ) defer.returnValue(TimelineBatch( events=recents, prev_batch=prev_batch_token, limited=limited or newly_joined_room )) @defer.inlineCallbacks def get_state_after_event(self, event): """ Get the room state after the given event Args: event(synapse.events.EventBase): event of interest Returns: A Deferred map from ((type, state_key)->Event) """ state_ids = yield self.store.get_state_ids_for_event(event.event_id) if event.is_state(): state_ids = state_ids.copy() state_ids[(event.type, event.state_key)] = event.event_id defer.returnValue(state_ids) @defer.inlineCallbacks def get_state_at(self, room_id, stream_position): """ Get the room state at a particular stream position Args: room_id(str): room for which to get state stream_position(StreamToken): point at which to get state Returns: A Deferred map from ((type, state_key)->Event) """ last_events, token = yield self.store.get_recent_events_for_room( room_id, end_token=stream_position.room_key, limit=1, ) if last_events: last_event = last_events[-1] state = yield self.get_state_after_event(last_event) else: # no events in this room - so presumably no state state = {} defer.returnValue(state) @defer.inlineCallbacks def compute_state_delta(self, room_id, batch, sync_config, since_token, now_token, full_state): """ Works out the differnce in state between the start of the timeline and the previous sync. Args: room_id(str): batch(synapse.handlers.sync.TimelineBatch): The timeline batch for the room that will be sent to the user. sync_config(synapse.handlers.sync.SyncConfig): since_token(str|None): Token of the end of the previous batch. May be None. now_token(str): Token of the end of the current batch. full_state(bool): Whether to force returning the full state. Returns: A deferred new event dictionary """ # TODO(mjark) Check if the state events were received by the server # after the previous sync, since we need to include those state # updates even if they occured logically before the previous event. # TODO(mjark) Check for new redactions in the state events. 
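        # Three cases are handled below: full_state syncs rebuild the state
        # to send from the start of the timeline; "limited" (gappy)
        # incremental syncs diff the state at the previous sync against the
        # state at the timeline start and the current state via
        # _calculate_state(); un-limited incremental syncs send no state
        # delta at all.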
with Measure(self.clock, "compute_state_delta"): if full_state: if batch: current_state_ids = yield self.store.get_state_ids_for_event( batch.events[-1].event_id ) state_ids = yield self.store.get_state_ids_for_event( batch.events[0].event_id ) else: current_state_ids = yield self.get_state_at( room_id, stream_position=now_token ) state_ids = current_state_ids timeline_state = { (event.type, event.state_key): event.event_id for event in batch.events if event.is_state() } state_ids = _calculate_state( timeline_contains=timeline_state, timeline_start=state_ids, previous={}, current=current_state_ids, ) elif batch.limited: state_at_previous_sync = yield self.get_state_at( room_id, stream_position=since_token ) current_state_ids = yield self.store.get_state_ids_for_event( batch.events[-1].event_id ) state_at_timeline_start = yield self.store.get_state_ids_for_event( batch.events[0].event_id ) timeline_state = { (event.type, event.state_key): event.event_id for event in batch.events if event.is_state() } state_ids = _calculate_state( timeline_contains=timeline_state, timeline_start=state_at_timeline_start, previous=state_at_previous_sync, current=current_state_ids, ) else: state_ids = {} state = {} if state_ids: state = yield self.store.get_events(state_ids.values()) defer.returnValue({ (e.type, e.state_key): e for e in sync_config.filter_collection.filter_room_state(state.values()) }) @defer.inlineCallbacks def unread_notifs_for_room_id(self, room_id, sync_config): with Measure(self.clock, "unread_notifs_for_room_id"): last_unread_event_id = yield self.store.get_last_receipt_event_id_for_user( user_id=sync_config.user.to_string(), room_id=room_id, receipt_type="m.read" ) notifs = [] if last_unread_event_id: notifs = yield self.store.get_unread_event_push_actions_by_room_for_user( room_id, sync_config.user.to_string(), last_unread_event_id ) defer.returnValue(notifs) # There is no new information in this period, so your notification # count is whatever it was last time. defer.returnValue(None) @defer.inlineCallbacks def generate_sync_result(self, sync_config, since_token=None, full_state=False): """Generates a sync result. Args: sync_config (SyncConfig) since_token (StreamToken) full_state (bool) Returns: Deferred(SyncResult) """ logger.info("Calculating sync response for %r", sync_config.user) # NB: The now_token gets changed by some of the generate_sync_* methods, # this is due to some of the underlying streams not supporting the ability # to query up to a given point. 
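        # (For example, _generate_sync_entry_for_to_device below replaces the
        # "to_device_key" on the builder's now_token with the stream id it
        # actually managed to fetch up to.)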
# Always use the `now_token` in `SyncResultBuilder` now_token = yield self.event_sources.get_current_token() sync_result_builder = SyncResultBuilder( sync_config, full_state, since_token=since_token, now_token=now_token, ) account_data_by_room = yield self._generate_sync_entry_for_account_data( sync_result_builder ) res = yield self._generate_sync_entry_for_rooms( sync_result_builder, account_data_by_room ) newly_joined_rooms, newly_joined_users, _, _ = res _, _, newly_left_rooms, newly_left_users = res block_all_presence_data = ( since_token is None and sync_config.filter_collection.blocks_all_presence() ) if not block_all_presence_data: yield self._generate_sync_entry_for_presence( sync_result_builder, newly_joined_rooms, newly_joined_users ) yield self._generate_sync_entry_for_to_device(sync_result_builder) device_lists = yield self._generate_sync_entry_for_device_list( sync_result_builder, newly_joined_rooms=newly_joined_rooms, newly_joined_users=newly_joined_users, newly_left_rooms=newly_left_rooms, newly_left_users=newly_left_users, ) device_id = sync_config.device_id one_time_key_counts = {} if device_id: user_id = sync_config.user.to_string() one_time_key_counts = yield self.store.count_e2e_one_time_keys( user_id, device_id ) yield self._generate_sync_entry_for_groups(sync_result_builder) defer.returnValue(SyncResult( presence=sync_result_builder.presence, account_data=sync_result_builder.account_data, joined=sync_result_builder.joined, invited=sync_result_builder.invited, archived=sync_result_builder.archived, to_device=sync_result_builder.to_device, device_lists=device_lists, groups=sync_result_builder.groups, device_one_time_keys_count=one_time_key_counts, next_batch=sync_result_builder.now_token, )) @measure_func("_generate_sync_entry_for_groups") @defer.inlineCallbacks def _generate_sync_entry_for_groups(self, sync_result_builder): user_id = sync_result_builder.sync_config.user.to_string() since_token = sync_result_builder.since_token now_token = sync_result_builder.now_token if since_token and since_token.groups_key: results = yield self.store.get_groups_changes_for_user( user_id, since_token.groups_key, now_token.groups_key, ) else: results = yield self.store.get_all_groups_for_user( user_id, now_token.groups_key, ) invited = {} joined = {} left = {} for result in results: membership = result["membership"] group_id = result["group_id"] gtype = result["type"] content = result["content"] if membership == "join": if gtype == "membership": # TODO: Add profile content.pop("membership", None) joined[group_id] = content["content"] else: joined.setdefault(group_id, {})[gtype] = content elif membership == "invite": if gtype == "membership": content.pop("membership", None) invited[group_id] = content["content"] else: if gtype == "membership": left[group_id] = content["content"] sync_result_builder.groups = GroupsSyncResult( join=joined, invite=invited, leave=left, ) @measure_func("_generate_sync_entry_for_device_list") @defer.inlineCallbacks def _generate_sync_entry_for_device_list(self, sync_result_builder, newly_joined_rooms, newly_joined_users, newly_left_rooms, newly_left_users): user_id = sync_result_builder.sync_config.user.to_string() since_token = sync_result_builder.since_token if since_token and since_token.device_list_key: changed = yield self.store.get_user_whose_devices_changed( since_token.device_list_key ) # TODO: Be more clever than this, i.e. remove users who we already # share a room with? 
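# --- Illustrative sketch (not part of synapse): the set filtering applied
# --- below when building the device_lists section of the sync response --
# --- only report device-list changes for users we still share a room with,
# --- and report "left" only for users we no longer share any room with.
# --- The user IDs are made-up example values.
def _example_device_list_filter(changed, newly_left_users, users_who_share_room):
    return {
        "changed": users_who_share_room & changed,
        "left": set(newly_left_users) - users_who_share_room,
    }

# _example_device_list_filter(
#     changed={"@alice:example.com", "@bob:example.com"},
#     newly_left_users={"@bob:example.com"},
#     users_who_share_room={"@alice:example.com"},
# ) == {"changed": {"@alice:example.com"}, "left": {"@bob:example.com"}}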
for room_id in newly_joined_rooms: joined_users = yield self.state.get_current_user_in_room(room_id) newly_joined_users.update(joined_users) for room_id in newly_left_rooms: left_users = yield self.state.get_current_user_in_room(room_id) newly_left_users.update(left_users) # TODO: Check that these users are actually new, i.e. either they # weren't in the previous sync *or* they left and rejoined. changed.update(newly_joined_users) if not changed and not newly_left_users: defer.returnValue(DeviceLists( changed=[], left=newly_left_users, )) users_who_share_room = yield self.store.get_users_who_share_room_with_user( user_id ) defer.returnValue(DeviceLists( changed=users_who_share_room & changed, left=set(newly_left_users) - users_who_share_room, )) else: defer.returnValue(DeviceLists( changed=[], left=[], )) @defer.inlineCallbacks def _generate_sync_entry_for_to_device(self, sync_result_builder): """Generates the portion of the sync response. Populates `sync_result_builder` with the result. Args: sync_result_builder(SyncResultBuilder) Returns: Deferred(dict): A dictionary containing the per room account data. """ user_id = sync_result_builder.sync_config.user.to_string() device_id = sync_result_builder.sync_config.device_id now_token = sync_result_builder.now_token since_stream_id = 0 if sync_result_builder.since_token is not None: since_stream_id = int(sync_result_builder.since_token.to_device_key) if since_stream_id != int(now_token.to_device_key): # We only delete messages when a new message comes in, but that's # fine so long as we delete them at some point. deleted = yield self.store.delete_messages_for_device( user_id, device_id, since_stream_id ) logger.debug("Deleted %d to-device messages up to %d", deleted, since_stream_id) messages, stream_id = yield self.store.get_new_messages_for_device( user_id, device_id, since_stream_id, now_token.to_device_key ) logger.debug( "Returning %d to-device messages between %d and %d (current token: %d)", len(messages), since_stream_id, stream_id, now_token.to_device_key ) sync_result_builder.now_token = now_token.copy_and_replace( "to_device_key", stream_id ) sync_result_builder.to_device = messages else: sync_result_builder.to_device = [] @defer.inlineCallbacks def _generate_sync_entry_for_account_data(self, sync_result_builder): """Generates the account data portion of the sync response. Populates `sync_result_builder` with the result. Args: sync_result_builder(SyncResultBuilder) Returns: Deferred(dict): A dictionary containing the per room account data. 
""" sync_config = sync_result_builder.sync_config user_id = sync_result_builder.sync_config.user.to_string() since_token = sync_result_builder.since_token if since_token and not sync_result_builder.full_state: account_data, account_data_by_room = ( yield self.store.get_updated_account_data_for_user( user_id, since_token.account_data_key, ) ) push_rules_changed = yield self.store.have_push_rules_changed_for_user( user_id, int(since_token.push_rules_key) ) if push_rules_changed: account_data["m.push_rules"] = yield self.push_rules_for_user( sync_config.user ) else: account_data, account_data_by_room = ( yield self.store.get_account_data_for_user( sync_config.user.to_string() ) ) account_data['m.push_rules'] = yield self.push_rules_for_user( sync_config.user ) account_data_for_user = sync_config.filter_collection.filter_account_data([ {"type": account_data_type, "content": content} for account_data_type, content in account_data.items() ]) sync_result_builder.account_data = account_data_for_user defer.returnValue(account_data_by_room) @defer.inlineCallbacks def _generate_sync_entry_for_presence(self, sync_result_builder, newly_joined_rooms, newly_joined_users): """Generates the presence portion of the sync response. Populates the `sync_result_builder` with the result. Args: sync_result_builder(SyncResultBuilder) newly_joined_rooms(list): List of rooms that the user has joined since the last sync (or empty if an initial sync) newly_joined_users(list): List of users that have joined rooms since the last sync (or empty if an initial sync) """ now_token = sync_result_builder.now_token sync_config = sync_result_builder.sync_config user = sync_result_builder.sync_config.user presence_source = self.event_sources.sources["presence"] since_token = sync_result_builder.since_token if since_token and not sync_result_builder.full_state: presence_key = since_token.presence_key include_offline = True else: presence_key = None include_offline = False presence, presence_key = yield presence_source.get_new_events( user=user, from_key=presence_key, is_guest=sync_config.is_guest, include_offline=include_offline, ) sync_result_builder.now_token = now_token.copy_and_replace( "presence_key", presence_key ) extra_users_ids = set(newly_joined_users) for room_id in newly_joined_rooms: users = yield self.state.get_current_user_in_room(room_id) extra_users_ids.update(users) extra_users_ids.discard(user.to_string()) if extra_users_ids: states = yield self.presence_handler.get_states( extra_users_ids, ) presence.extend(states) # Deduplicate the presence entries so that there's at most one per user presence = {p.user_id: p for p in presence}.values() presence = sync_config.filter_collection.filter_presence( presence ) sync_result_builder.presence = presence @defer.inlineCallbacks def _generate_sync_entry_for_rooms(self, sync_result_builder, account_data_by_room): """Generates the rooms portion of the sync response. Populates the `sync_result_builder` with the result. 
Args: sync_result_builder(SyncResultBuilder) account_data_by_room(dict): Dictionary of per room account data Returns: Deferred(tuple): Returns a 4-tuple of `(newly_joined_rooms, newly_joined_users, newly_left_rooms, newly_left_users)` """ user_id = sync_result_builder.sync_config.user.to_string() block_all_room_ephemeral = ( sync_result_builder.since_token is None and sync_result_builder.sync_config.filter_collection.blocks_all_room_ephemeral() ) if block_all_room_ephemeral: ephemeral_by_room = {} else: now_token, ephemeral_by_room = yield self.ephemeral_by_room( sync_result_builder.sync_config, now_token=sync_result_builder.now_token, since_token=sync_result_builder.since_token, ) sync_result_builder.now_token = now_token # We check up front if anything has changed, if it hasn't then there is # no point in going futher. since_token = sync_result_builder.since_token if not sync_result_builder.full_state: if since_token and not ephemeral_by_room and not account_data_by_room: have_changed = yield self._have_rooms_changed(sync_result_builder) if not have_changed: tags_by_room = yield self.store.get_updated_tags( user_id, since_token.account_data_key, ) if not tags_by_room: logger.debug("no-oping sync") defer.returnValue(([], [], [], [])) ignored_account_data = yield self.store.get_global_account_data_by_type_for_user( "m.ignored_user_list", user_id=user_id, ) if ignored_account_data: ignored_users = ignored_account_data.get("ignored_users", {}).keys() else: ignored_users = frozenset() if since_token: res = yield self._get_rooms_changed(sync_result_builder, ignored_users) room_entries, invited, newly_joined_rooms, newly_left_rooms = res tags_by_room = yield self.store.get_updated_tags( user_id, since_token.account_data_key, ) else: res = yield self._get_all_rooms(sync_result_builder, ignored_users) room_entries, invited, newly_joined_rooms = res newly_left_rooms = [] tags_by_room = yield self.store.get_tags_for_user(user_id) def handle_room_entries(room_entry): return self._generate_room_entry( sync_result_builder, ignored_users, room_entry, ephemeral=ephemeral_by_room.get(room_entry.room_id, []), tags=tags_by_room.get(room_entry.room_id), account_data=account_data_by_room.get(room_entry.room_id, {}), always_include=sync_result_builder.full_state, ) yield concurrently_execute(handle_room_entries, room_entries, 10) sync_result_builder.invited.extend(invited) # Now we want to get any newly joined users newly_joined_users = set() newly_left_users = set() if since_token: for joined_sync in sync_result_builder.joined: it = itertools.chain( joined_sync.timeline.events, joined_sync.state.itervalues() ) for event in it: if event.type == EventTypes.Member: if event.membership == Membership.JOIN: newly_joined_users.add(event.state_key) else: prev_content = event.unsigned.get("prev_content", {}) prev_membership = prev_content.get("membership", None) if prev_membership == Membership.JOIN: newly_left_users.add(event.state_key) newly_left_users -= newly_joined_users defer.returnValue(( newly_joined_rooms, newly_joined_users, newly_left_rooms, newly_left_users, )) @defer.inlineCallbacks def _have_rooms_changed(self, sync_result_builder): """Returns whether there may be any new events that should be sent down the sync. Returns True if there are. """ user_id = sync_result_builder.sync_config.user.to_string() since_token = sync_result_builder.since_token now_token = sync_result_builder.now_token assert since_token # Get a list of membership change events that have happened. 
rooms_changed = yield self.store.get_membership_changes_for_user( user_id, since_token.room_key, now_token.room_key ) if rooms_changed: defer.returnValue(True) app_service = self.store.get_app_service_by_user_id(user_id) if app_service: rooms = yield self.store.get_app_service_rooms(app_service) joined_room_ids = set(r.room_id for r in rooms) else: joined_room_ids = yield self.store.get_rooms_for_user(user_id) stream_id = RoomStreamToken.parse_stream_token(since_token.room_key).stream for room_id in joined_room_ids: if self.store.has_room_changed_since(room_id, stream_id): defer.returnValue(True) defer.returnValue(False) @defer.inlineCallbacks def _get_rooms_changed(self, sync_result_builder, ignored_users): """Gets the changes that have happened since the last sync. Args: sync_result_builder(SyncResultBuilder) ignored_users(set(str)): Set of users ignored by user. Returns: Deferred(tuple): Returns a tuple of the form: `([RoomSyncResultBuilder], [InvitedSyncResult], newly_joined_rooms, newly_left_rooms)` """ user_id = sync_result_builder.sync_config.user.to_string() since_token = sync_result_builder.since_token now_token = sync_result_builder.now_token sync_config = sync_result_builder.sync_config assert since_token app_service = self.store.get_app_service_by_user_id(user_id) if app_service: rooms = yield self.store.get_app_service_rooms(app_service) joined_room_ids = set(r.room_id for r in rooms) else: joined_room_ids = yield self.store.get_rooms_for_user(user_id) # Get a list of membership change events that have happened. rooms_changed = yield self.store.get_membership_changes_for_user( user_id, since_token.room_key, now_token.room_key ) mem_change_events_by_room_id = {} for event in rooms_changed: mem_change_events_by_room_id.setdefault(event.room_id, []).append(event) newly_joined_rooms = [] newly_left_rooms = [] room_entries = [] invited = [] for room_id, events in mem_change_events_by_room_id.iteritems(): non_joins = [e for e in events if e.membership != Membership.JOIN] has_join = len(non_joins) != len(events) # We want to figure out if we joined the room at some point since # the last sync (even if we have since left). This is to make sure # we do send down the room, and with full state, where necessary old_state_ids = None if room_id in joined_room_ids and non_joins: # Always include if the user (re)joined the room, especially # important so that device list changes are calculated correctly. # If there are non join member events, but we are still in the room, # then the user must have left and joined newly_joined_rooms.append(room_id) # User is in the room so we don't need to do the invite/leave checks continue if room_id in joined_room_ids or has_join: old_state_ids = yield self.get_state_at(room_id, since_token) old_mem_ev_id = old_state_ids.get((EventTypes.Member, user_id), None) old_mem_ev = None if old_mem_ev_id: old_mem_ev = yield self.store.get_event( old_mem_ev_id, allow_none=True ) if not old_mem_ev or old_mem_ev.membership != Membership.JOIN: newly_joined_rooms.append(room_id) # If user is in the room then we don't need to do the invite/leave checks if room_id in joined_room_ids: continue if not non_joins: continue # Check if we have left the room. This can either be because we were # joined before *or* that we since joined and then left.
if events[-1].membership != Membership.JOIN: if has_join: newly_left_rooms.append(room_id) else: if not old_state_ids: old_state_ids = yield self.get_state_at(room_id, since_token) old_mem_ev_id = old_state_ids.get( (EventTypes.Member, user_id), None, ) old_mem_ev = None if old_mem_ev_id: old_mem_ev = yield self.store.get_event( old_mem_ev_id, allow_none=True ) if old_mem_ev and old_mem_ev.membership == Membership.JOIN: newly_left_rooms.append(room_id) # Only bother if we're still currently invited should_invite = non_joins[-1].membership == Membership.INVITE if should_invite: if event.sender not in ignored_users: room_sync = InvitedSyncResult(room_id, invite=non_joins[-1]) if room_sync: invited.append(room_sync) # Always include leave/ban events. Just take the last one. # TODO: How do we handle ban -> leave in same batch? leave_events = [ e for e in non_joins if e.membership in (Membership.LEAVE, Membership.BAN) ] if leave_events: leave_event = leave_events[-1] leave_stream_token = yield self.store.get_stream_token_for_event( leave_event.event_id ) leave_token = since_token.copy_and_replace( "room_key", leave_stream_token ) if since_token and since_token.is_after(leave_token): continue room_entries.append(RoomSyncResultBuilder( room_id=room_id, rtype="archived", events=None, newly_joined=room_id in newly_joined_rooms, full_state=False, since_token=since_token, upto_token=leave_token, )) timeline_limit = sync_config.filter_collection.timeline_limit() # Get all events for rooms we're currently joined to. room_to_events = yield self.store.get_room_events_stream_for_rooms( room_ids=joined_room_ids, from_key=since_token.room_key, to_key=now_token.room_key, limit=timeline_limit + 1, ) # We loop through all room ids, even if there are no new events, in case # there are non room events that we need to notify about. for room_id in joined_room_ids: room_entry = room_to_events.get(room_id, None) if room_entry: events, start_key = room_entry prev_batch_token = now_token.copy_and_replace("room_key", start_key) room_entries.append(RoomSyncResultBuilder( room_id=room_id, rtype="joined", events=events, newly_joined=room_id in newly_joined_rooms, full_state=False, since_token=None if room_id in newly_joined_rooms else since_token, upto_token=prev_batch_token, )) else: room_entries.append(RoomSyncResultBuilder( room_id=room_id, rtype="joined", events=[], newly_joined=room_id in newly_joined_rooms, full_state=False, since_token=since_token, upto_token=since_token, )) defer.returnValue((room_entries, invited, newly_joined_rooms, newly_left_rooms)) @defer.inlineCallbacks def _get_all_rooms(self, sync_result_builder, ignored_users): """Returns entries for all rooms for the user. Args: sync_result_builder(SyncResultBuilder) ignored_users(set(str)): Set of users ignored by user.
Returns: Deferred(tuple): Returns a tuple of the form: `([RoomSyncResultBuilder], [InvitedSyncResult], [])` """ user_id = sync_result_builder.sync_config.user.to_string() since_token = sync_result_builder.since_token now_token = sync_result_builder.now_token sync_config = sync_result_builder.sync_config membership_list = ( Membership.INVITE, Membership.JOIN, Membership.LEAVE, Membership.BAN ) room_list = yield self.store.get_rooms_for_user_where_membership_is( user_id=user_id, membership_list=membership_list ) room_entries = [] invited = [] for event in room_list: if event.membership == Membership.JOIN: room_entries.append(RoomSyncResultBuilder( room_id=event.room_id, rtype="joined", events=None, newly_joined=False, full_state=True, since_token=since_token, upto_token=now_token, )) elif event.membership == Membership.INVITE: if event.sender in ignored_users: continue invite = yield self.store.get_event(event.event_id) invited.append(InvitedSyncResult( room_id=event.room_id, invite=invite, )) elif event.membership in (Membership.LEAVE, Membership.BAN): # Always send down rooms we were banned or kicked from. if not sync_config.filter_collection.include_leave: if event.membership == Membership.LEAVE: if user_id == event.sender: continue leave_token = now_token.copy_and_replace( "room_key", "s%d" % (event.stream_ordering,) ) room_entries.append(RoomSyncResultBuilder( room_id=event.room_id, rtype="archived", events=None, newly_joined=False, full_state=True, since_token=since_token, upto_token=leave_token, )) defer.returnValue((room_entries, invited, [])) @defer.inlineCallbacks def _generate_room_entry(self, sync_result_builder, ignored_users, room_builder, ephemeral, tags, account_data, always_include=False): """Populates the `joined` and `archived` section of `sync_result_builder` based on the `room_builder`. Args: sync_result_builder(SyncResultBuilder) ignored_users(set(str)): Set of users ignored by user. room_builder(RoomSyncResultBuilder) ephemeral(list): List of new ephemeral events for room tags(list): List of *all* tags for room, or None if there has been no change. account_data(list): List of new account data for room always_include(bool): Always include this room in the sync response, even if empty. """ newly_joined = room_builder.newly_joined full_state = ( room_builder.full_state or newly_joined or sync_result_builder.full_state ) events = room_builder.events # We want to shortcut out as early as possible. 
if not (always_include or account_data or ephemeral or full_state): if events == [] and tags is None: return since_token = sync_result_builder.since_token now_token = sync_result_builder.now_token sync_config = sync_result_builder.sync_config room_id = room_builder.room_id since_token = room_builder.since_token upto_token = room_builder.upto_token batch = yield self._load_filtered_recents( room_id, sync_config, now_token=upto_token, since_token=since_token, recents=events, newly_joined_room=newly_joined, ) account_data_events = [] if tags is not None: account_data_events.append({ "type": "m.tag", "content": {"tags": tags}, }) for account_data_type, content in account_data.items(): account_data_events.append({ "type": account_data_type, "content": content, }) account_data = sync_config.filter_collection.filter_room_account_data( account_data_events ) ephemeral = sync_config.filter_collection.filter_room_ephemeral(ephemeral) if not (always_include or batch or account_data or ephemeral or full_state): return state = yield self.compute_state_delta( room_id, batch, sync_config, since_token, now_token, full_state=full_state ) if room_builder.rtype == "joined": unread_notifications = {} room_sync = JoinedSyncResult( room_id=room_id, timeline=batch, state=state, ephemeral=ephemeral, account_data=account_data_events, unread_notifications=unread_notifications, ) if room_sync or always_include: notifs = yield self.unread_notifs_for_room_id( room_id, sync_config ) if notifs is not None: unread_notifications["notification_count"] = notifs["notify_count"] unread_notifications["highlight_count"] = notifs["highlight_count"] sync_result_builder.joined.append(room_sync) elif room_builder.rtype == "archived": room_sync = ArchivedSyncResult( room_id=room_id, timeline=batch, state=state, account_data=account_data, ) if room_sync or always_include: sync_result_builder.archived.append(room_sync) else: raise Exception("Unrecognized rtype: %r", room_builder.rtype) def _action_has_highlight(actions): for action in actions: try: if action.get("set_tweak", None) == "highlight": return action.get("value", True) except AttributeError: pass return False def _calculate_state(timeline_contains, timeline_start, previous, current): """Works out what state to include in a sync response. Args: timeline_contains (dict): state in the timeline timeline_start (dict): state at the start of the timeline previous (dict): state at the end of the previous sync (or empty dict if this is an initial sync) current (dict): state at the end of the timeline Returns: dict """ event_id_to_key = { e: key for key, e in itertools.chain( timeline_contains.items(), previous.items(), timeline_start.items(), current.items(), ) } c_ids = set(e for e in current.values()) tc_ids = set(e for e in timeline_contains.values()) p_ids = set(e for e in previous.values()) ts_ids = set(e for e in timeline_start.values()) state_ids = ((c_ids | ts_ids) - p_ids) - tc_ids return { event_id_to_key[e]: e for e in state_ids } class SyncResultBuilder(object): "Used to help build up a new SyncResult for a user" def __init__(self, sync_config, full_state, since_token, now_token): """ Args: sync_config(SyncConfig) full_state(bool): The full_state flag as specified by user since_token(StreamToken): The token supplied by user, or None. now_token(StreamToken): The token to sync up to. 
""" self.sync_config = sync_config self.full_state = full_state self.since_token = since_token self.now_token = now_token self.presence = [] self.account_data = [] self.joined = [] self.invited = [] self.archived = [] self.device = [] self.groups = None self.to_device = [] class RoomSyncResultBuilder(object): """Stores information needed to create either a `JoinedSyncResult` or `ArchivedSyncResult`. """ def __init__(self, room_id, rtype, events, newly_joined, full_state, since_token, upto_token): """ Args: room_id(str) rtype(str): One of `"joined"` or `"archived"` events(list): List of events to include in the room, (more events may be added when generating result). newly_joined(bool): If the user has newly joined the room full_state(bool): Whether the full state should be sent in result since_token(StreamToken): Earliest point to return events from, or None upto_token(StreamToken): Latest point to return events from. """ self.room_id = room_id self.rtype = rtype self.events = events self.newly_joined = newly_joined self.full_state = full_state self.since_token = since_token self.upto_token = upto_token synapse-0.24.0/synapse/handlers/typing.py000066400000000000000000000254531317335640100204410ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from twisted.internet import defer from synapse.api.errors import SynapseError, AuthError from synapse.util.logcontext import preserve_fn from synapse.util.metrics import Measure from synapse.util.wheel_timer import WheelTimer from synapse.types import UserID, get_domain_from_id import logging from collections import namedtuple logger = logging.getLogger(__name__) # A tiny object useful for storing a user's membership in a room, as a mapping # key RoomMember = namedtuple("RoomMember", ("room_id", "user_id")) # How often we expect remote servers to resend us presence. FEDERATION_TIMEOUT = 60 * 1000 # How often to resend typing across federation. 
FEDERATION_PING_INTERVAL = 40 * 1000 class TypingHandler(object): def __init__(self, hs): self.store = hs.get_datastore() self.server_name = hs.config.server_name self.auth = hs.get_auth() self.is_mine_id = hs.is_mine_id self.notifier = hs.get_notifier() self.state = hs.get_state_handler() self.hs = hs self.clock = hs.get_clock() self.wheel_timer = WheelTimer(bucket_size=5000) self.federation = hs.get_federation_sender() hs.get_replication_layer().register_edu_handler("m.typing", self._recv_edu) hs.get_distributor().observe("user_left_room", self.user_left_room) self._member_typing_until = {} # clock time we expect to stop self._member_last_federation_poke = {} # map room IDs to serial numbers self._room_serials = {} self._latest_room_serial = 0 # map room IDs to sets of users currently typing self._room_typing = {} self.clock.looping_call( self._handle_timeouts, 5000, ) def _handle_timeouts(self): logger.info("Checking for typing timeouts") now = self.clock.time_msec() members = set(self.wheel_timer.fetch(now)) for member in members: if not self.is_typing(member): # Nothing to do if they're no longer typing continue until = self._member_typing_until.get(member, None) if not until or until <= now: logger.info("Timing out typing for: %s", member.user_id) self._stopped_typing(member) continue # Check if we need to resend a keep alive over federation for this # user. if self.hs.is_mine_id(member.user_id): last_fed_poke = self._member_last_federation_poke.get(member, None) if not last_fed_poke or last_fed_poke + FEDERATION_PING_INTERVAL <= now: preserve_fn(self._push_remote)( member=member, typing=True ) # Add a paranoia timer to ensure that we always have a timer for # each person typing. self.wheel_timer.insert( now=now, obj=member, then=now + 60 * 1000, ) def is_typing(self, member): return member.user_id in self._room_typing.get(member.room_id, []) @defer.inlineCallbacks def started_typing(self, target_user, auth_user, room_id, timeout): target_user_id = target_user.to_string() auth_user_id = auth_user.to_string() if not self.is_mine_id(target_user_id): raise SynapseError(400, "User is not hosted on this Home Server") if target_user_id != auth_user_id: raise AuthError(400, "Cannot set another user's typing state") yield self.auth.check_joined_room(room_id, target_user_id) logger.debug( "%s has started typing in %s", target_user_id, room_id ) member = RoomMember(room_id=room_id, user_id=target_user_id) was_present = member.user_id in self._room_typing.get(room_id, set()) now = self.clock.time_msec() self._member_typing_until[member] = now + timeout self.wheel_timer.insert( now=now, obj=member, then=now + timeout, ) if was_present: # No point sending another notification defer.returnValue(None) self._push_update( member=member, typing=True, ) @defer.inlineCallbacks def stopped_typing(self, target_user, auth_user, room_id): target_user_id = target_user.to_string() auth_user_id = auth_user.to_string() if not self.is_mine_id(target_user_id): raise SynapseError(400, "User is not hosted on this Home Server") if target_user_id != auth_user_id: raise AuthError(400, "Cannot set another user's typing state") yield self.auth.check_joined_room(room_id, target_user_id) logger.debug( "%s has stopped typing in %s", target_user_id, room_id ) member = RoomMember(room_id=room_id, user_id=target_user_id) self._stopped_typing(member) @defer.inlineCallbacks def user_left_room(self, user, room_id): user_id = user.to_string() if self.is_mine_id(user_id): member = RoomMember(room_id=room_id, user_id=user_id) yield 
self._stopped_typing(member) def _stopped_typing(self, member): if member.user_id not in self._room_typing.get(member.room_id, set()): # No point defer.returnValue(None) self._member_typing_until.pop(member, None) self._member_last_federation_poke.pop(member, None) self._push_update( member=member, typing=False, ) def _push_update(self, member, typing): if self.hs.is_mine_id(member.user_id): # Only send updates for changes to our own users. preserve_fn(self._push_remote)(member, typing) self._push_update_local( member=member, typing=typing ) @defer.inlineCallbacks def _push_remote(self, member, typing): users = yield self.state.get_current_user_in_room(member.room_id) self._member_last_federation_poke[member] = self.clock.time_msec() now = self.clock.time_msec() self.wheel_timer.insert( now=now, obj=member, then=now + FEDERATION_PING_INTERVAL, ) for domain in set(get_domain_from_id(u) for u in users): if domain != self.server_name: self.federation.send_edu( destination=domain, edu_type="m.typing", content={ "room_id": member.room_id, "user_id": member.user_id, "typing": typing, }, key=member, ) @defer.inlineCallbacks def _recv_edu(self, origin, content): room_id = content["room_id"] user_id = content["user_id"] member = RoomMember(user_id=user_id, room_id=room_id) # Check that the string is a valid user id user = UserID.from_string(user_id) if user.domain != origin: logger.info( "Got typing update from %r with bad 'user_id': %r", origin, user_id, ) return users = yield self.state.get_current_user_in_room(room_id) domains = set(get_domain_from_id(u) for u in users) if self.server_name in domains: logger.info("Got typing update from %s: %r", user_id, content) now = self.clock.time_msec() self._member_typing_until[member] = now + FEDERATION_TIMEOUT self.wheel_timer.insert( now=now, obj=member, then=now + FEDERATION_TIMEOUT, ) self._push_update_local( member=member, typing=content["typing"] ) def _push_update_local(self, member, typing): room_set = self._room_typing.setdefault(member.room_id, set()) if typing: room_set.add(member.user_id) else: room_set.discard(member.user_id) self._latest_room_serial += 1 self._room_serials[member.room_id] = self._latest_room_serial self.notifier.on_new_event( "typing_key", self._latest_room_serial, rooms=[member.room_id] ) def get_all_typing_updates(self, last_id, current_id): # TODO: Work out a way to do this without scanning the entire state. 
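# --- Illustrative sketch (not part of synapse): roughly the shape of the
# --- m.typing EDU content that _push_remote() above hands to
# --- federation.send_edu() and that _recv_edu() consumes.  The room and
# --- user IDs are made-up example values.
example_typing_edu_content = {
    "room_id": "!abc123:example.com",
    "user_id": "@alice:example.com",
    "typing": True,  # False once the user stops typing or times out
}
# _recv_edu(origin, content) receives this dict as `content`, and only
# trusts it if `user_id` belongs to the sending `origin` domain.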
if last_id == current_id: return [] rows = [] for room_id, serial in self._room_serials.items(): if last_id < serial and serial <= current_id: typing = self._room_typing[room_id] rows.append((serial, room_id, list(typing))) rows.sort() return rows def get_current_token(self): return self._latest_room_serial class TypingNotificationEventSource(object): def __init__(self, hs): self.hs = hs self.clock = hs.get_clock() # We can't call get_typing_handler here because there's a cycle: # # Typing -> Notifier -> TypingNotificationEventSource -> Typing # self.get_typing_handler = hs.get_typing_handler def _make_event_for(self, room_id): typing = self.get_typing_handler()._room_typing[room_id] return { "type": "m.typing", "room_id": room_id, "content": { "user_ids": list(typing), }, } def get_new_events(self, from_key, room_ids, **kwargs): with Measure(self.clock, "typing.get_new_events"): from_key = int(from_key) handler = self.get_typing_handler() events = [] for room_id in room_ids: if room_id not in handler._room_serials: continue if handler._room_serials[room_id] <= from_key: continue events.append(self._make_event_for(room_id)) return events, handler._latest_room_serial def get_current_key(self): return self.get_typing_handler()._latest_room_serial def get_pagination_rows(self, user, pagination_config, key): return ([], pagination_config.from_key) synapse-0.24.0/synapse/handlers/user_directory.py000066400000000000000000000605371317335640100221730ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2017 Vector Creations Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import logging from twisted.internet import defer from synapse.api.constants import EventTypes, JoinRules, Membership from synapse.storage.roommember import ProfileInfo from synapse.util.metrics import Measure from synapse.util.async import sleep logger = logging.getLogger(__name__) class UserDirectoyHandler(object): """Handles querying of and keeping updated the user_directory. N.B.: ASSUMES IT IS THE ONLY THING THAT MODIFIES THE USER DIRECTORY The user directory is filled with users who this server can see are joined to a world_readable or publically joinable room. We keep a database table up to date by streaming changes of the current state and recalculating whether users should be in the directory or not when necessary. For each user in the directory we also store a room_id which is public and that the user is joined to. This allows us to ignore history_visibility and join_rules changes for that user in all other public rooms, as we know they'll still be in at least one public room. """ INITIAL_SLEEP_MS = 50 INITIAL_SLEEP_COUNT = 100 INITIAL_BATCH_SIZE = 100 def __init__(self, hs): self.store = hs.get_datastore() self.state = hs.get_state_handler() self.server_name = hs.hostname self.clock = hs.get_clock() self.notifier = hs.get_notifier() self.is_mine_id = hs.is_mine_id self.update_user_directory = hs.config.update_user_directory # When start up for the first time we need to populate the user_directory. 
# This is a set of user_id's we've inserted already self.initially_handled_users = set() self.initially_handled_users_in_public = set() self.initially_handled_users_share = set() self.initially_handled_users_share_private_room = set() # The current position in the current_state_delta stream self.pos = None # Guard to ensure we only process deltas one at a time self._is_processing = False if self.update_user_directory: self.notifier.add_replication_callback(self.notify_new_event) # We kick this off so that we don't have to wait for a change before # we start populating the user directory self.clock.call_later(0, self.notify_new_event) def search_users(self, user_id, search_term, limit): """Searches for users in directory Returns: dict of the form:: { "limited": , # whether there were more results or not "results": [ # Ordered by best match first { "user_id": , "display_name": , "avatar_url": } ] } """ return self.store.search_user_dir(user_id, search_term, limit) @defer.inlineCallbacks def notify_new_event(self): """Called when there may be more deltas to process """ if not self.update_user_directory: return if self._is_processing: return self._is_processing = True try: yield self._unsafe_process() finally: self._is_processing = False @defer.inlineCallbacks def _unsafe_process(self): # If self.pos is None then means we haven't fetched it from DB if self.pos is None: self.pos = yield self.store.get_user_directory_stream_pos() # If still None then we need to do the initial fill of directory if self.pos is None: yield self._do_initial_spam() self.pos = yield self.store.get_user_directory_stream_pos() # Loop round handling deltas until we're up to date while True: with Measure(self.clock, "user_dir_delta"): deltas = yield self.store.get_current_state_deltas(self.pos) if not deltas: return logger.info("Handling %d state deltas", len(deltas)) yield self._handle_deltas(deltas) self.pos = deltas[-1]["stream_id"] yield self.store.update_user_directory_stream_pos(self.pos) @defer.inlineCallbacks def _do_initial_spam(self): """Populates the user_directory from the current state of the DB, used when synapse first starts with user_directory support """ new_pos = yield self.store.get_max_stream_id_in_current_state_deltas() # Delete any existing entries just in case there are any yield self.store.delete_all_from_user_dir() # We process by going through each existing room at a time. room_ids = yield self.store.get_all_rooms() logger.info("Doing initial update of user directory. %d rooms", len(room_ids)) num_processed_rooms = 1 for room_id in room_ids: logger.info("Handling room %d/%d", num_processed_rooms, len(room_ids)) yield self._handle_intial_room(room_id) num_processed_rooms += 1 yield sleep(self.INITIAL_SLEEP_MS / 1000.) 
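# --- Illustrative sketch (not part of synapse): the throttling pattern used
# --- while doing the initial fill of the user directory -- sleep briefly
# --- every INITIAL_SLEEP_COUNT items so the backfill does not starve the
# --- reactor.  The item list and processing step are made-up placeholders.
from twisted.internet import defer
from synapse.util.async import sleep

@defer.inlineCallbacks
def _example_throttled_loop(items, sleep_count=100, sleep_ms=50):
    count = 0
    for item in items:
        if count % sleep_count == 0:
            yield sleep(sleep_ms / 1000.)  # yield control back to the reactor
        count += 1
        # ... process `item` here ...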
logger.info("Processed all rooms.") self.initially_handled_users = None self.initially_handled_users_in_public = None self.initially_handled_users_share = None self.initially_handled_users_share_private_room = None yield self.store.update_user_directory_stream_pos(new_pos) @defer.inlineCallbacks def _handle_intial_room(self, room_id): """Called when we initially fill out user_directory one room at a time """ is_in_room = yield self.store.is_host_joined(room_id, self.server_name) if not is_in_room: return is_public = yield self.store.is_room_world_readable_or_publicly_joinable(room_id) users_with_profile = yield self.state.get_current_user_in_room(room_id) user_ids = set(users_with_profile) unhandled_users = user_ids - self.initially_handled_users yield self.store.add_profiles_to_user_dir( room_id, { user_id: users_with_profile[user_id] for user_id in unhandled_users } ) self.initially_handled_users |= unhandled_users if is_public: yield self.store.add_users_to_public_room( room_id, user_ids=user_ids - self.initially_handled_users_in_public ) self.initially_handled_users_in_public |= user_ids # We now go and figure out the new users who share rooms with user entries # We sleep aggressively here as otherwise it can starve resources. # We also batch up inserts/updates, but try to avoid too many at once. to_insert = set() to_update = set() count = 0 for user_id in user_ids: if count % self.INITIAL_SLEEP_COUNT == 0: yield sleep(self.INITIAL_SLEEP_MS / 1000.) if not self.is_mine_id(user_id): count += 1 continue if self.store.get_if_app_services_interested_in_user(user_id): count += 1 continue for other_user_id in user_ids: if user_id == other_user_id: continue if count % self.INITIAL_SLEEP_COUNT == 0: yield sleep(self.INITIAL_SLEEP_MS / 1000.) count += 1 user_set = (user_id, other_user_id) if user_set in self.initially_handled_users_share_private_room: continue if user_set in self.initially_handled_users_share: if is_public: continue to_update.add(user_set) else: to_insert.add(user_set) if is_public: self.initially_handled_users_share.add(user_set) else: self.initially_handled_users_share_private_room.add(user_set) if len(to_insert) > self.INITIAL_BATCH_SIZE: yield self.store.add_users_who_share_room( room_id, not is_public, to_insert, ) to_insert.clear() if len(to_update) > self.INITIAL_BATCH_SIZE: yield self.store.update_users_who_share_room( room_id, not is_public, to_update, ) to_update.clear() if to_insert: yield self.store.add_users_who_share_room( room_id, not is_public, to_insert, ) to_insert.clear() if to_update: yield self.store.update_users_who_share_room( room_id, not is_public, to_update, ) to_update.clear() @defer.inlineCallbacks def _handle_deltas(self, deltas): """Called with the state deltas to process """ for delta in deltas: typ = delta["type"] state_key = delta["state_key"] room_id = delta["room_id"] event_id = delta["event_id"] prev_event_id = delta["prev_event_id"] logger.debug("Handling: %r %r, %s", typ, state_key, event_id) # For join rule and visibility changes we need to check if the room # may have become public or not and add/remove the users in said room if typ in (EventTypes.RoomHistoryVisibility, EventTypes.JoinRules): yield self._handle_room_publicity_change( room_id, prev_event_id, event_id, typ, ) elif typ == EventTypes.Member: change = yield self._get_key_change( prev_event_id, event_id, key_name="membership", public_value=Membership.JOIN, ) if change is None: # Handle any profile changes yield self._handle_profile_change( state_key, room_id, prev_event_id, 
event_id, ) continue if not change: # Need to check if the server left the room entirely, if so # we might need to remove all the users in that room is_in_room = yield self.store.is_host_joined( room_id, self.server_name, ) if not is_in_room: logger.info("Server left room: %r", room_id) # Fetch all the users that we marked as being in user # directory due to being in the room and then check if # need to remove those users or not user_ids = yield self.store.get_users_in_dir_due_to_room(room_id) for user_id in user_ids: yield self._handle_remove_user(room_id, user_id) return else: logger.debug("Server is still in room: %r", room_id) if change: # The user joined event = yield self.store.get_event(event_id, allow_none=True) profile = ProfileInfo( avatar_url=event.content.get("avatar_url"), display_name=event.content.get("displayname"), ) yield self._handle_new_user(room_id, state_key, profile) else: # The user left yield self._handle_remove_user(room_id, state_key) else: logger.debug("Ignoring irrelevant type: %r", typ) @defer.inlineCallbacks def _handle_room_publicity_change(self, room_id, prev_event_id, event_id, typ): """Handle a room having potentially changed from/to world_readable/publically joinable. Args: room_id (str) prev_event_id (str|None): The previous event before the state change event_id (str|None): The new event after the state change typ (str): Type of the event """ logger.debug("Handling change for %s: %s", typ, room_id) if typ == EventTypes.RoomHistoryVisibility: change = yield self._get_key_change( prev_event_id, event_id, key_name="history_visibility", public_value="world_readable", ) elif typ == EventTypes.JoinRules: change = yield self._get_key_change( prev_event_id, event_id, key_name="join_rule", public_value=JoinRules.PUBLIC, ) else: raise Exception("Invalid event type") # If change is None, no change. True => become world_readable/public, # False => was world_readable/public if change is None: logger.debug("No change") return # There's been a change to or from being world readable. is_public = yield self.store.is_room_world_readable_or_publicly_joinable( room_id ) logger.debug("Change: %r, is_public: %r", change, is_public) if change and not is_public: # If we became world readable but room isn't currently public then # we ignore the change return elif not change and is_public: # If we stopped being world readable but are still public, # ignore the change return if change: users_with_profile = yield self.state.get_current_user_in_room(room_id) for user_id, profile in users_with_profile.iteritems(): yield self._handle_new_user(room_id, user_id, profile) else: users = yield self.store.get_users_in_public_due_to_room(room_id) for user_id in users: yield self._handle_remove_user(room_id, user_id) @defer.inlineCallbacks def _handle_new_user(self, room_id, user_id, profile): """Called when we might need to add user to directory Args: room_id (str): room_id that user joined or started being public that user_id (str) """ logger.debug("Adding user to dir, %r", user_id) row = yield self.store.get_user_in_directory(user_id) if not row: yield self.store.add_profiles_to_user_dir(room_id, {user_id: profile}) is_public = yield self.store.is_room_world_readable_or_publicly_joinable( room_id ) if is_public: row = yield self.store.get_user_in_public_room(user_id) if not row: yield self.store.add_users_to_public_room(room_id, [user_id]) else: logger.debug("Not adding user to public dir, %r", user_id) # Now we update users who share rooms with users. 
We do this by getting # all the current users in the room and seeing which aren't already # marked in the database as sharing with `user_id` users_with_profile = yield self.state.get_current_user_in_room(room_id) to_insert = set() to_update = set() is_appservice = self.store.get_if_app_services_interested_in_user(user_id) # First, if they're our user then we need to update for every user if self.is_mine_id(user_id) and not is_appservice: # Returns a map of other_user_id -> shared_private. We only need # to update mappings if for users that either don't share a room # already (aren't in the map) or, if the room is private, those that # only share a public room. user_ids_shared = yield self.store.get_users_who_share_room_from_dir( user_id ) for other_user_id in users_with_profile: if user_id == other_user_id: continue shared_is_private = user_ids_shared.get(other_user_id) if shared_is_private is True: # We've already marked in the database they share a private room continue elif shared_is_private is False: # They already share a public room, so only update if this is # a private room if not is_public: to_update.add((user_id, other_user_id)) elif shared_is_private is None: # This is the first time they both share a room to_insert.add((user_id, other_user_id)) # Next we need to update for every local user in the room for other_user_id in users_with_profile: if user_id == other_user_id: continue is_appservice = self.store.get_if_app_services_interested_in_user( other_user_id ) if self.is_mine_id(other_user_id) and not is_appservice: shared_is_private = yield self.store.get_if_users_share_a_room( other_user_id, user_id, ) if shared_is_private is True: # We've already marked in the database they share a private room continue elif shared_is_private is False: # They already share a public room, so only update if this is # a private room if not is_public: to_update.add((other_user_id, user_id)) elif shared_is_private is None: # This is the first time they both share a room to_insert.add((other_user_id, user_id)) if to_insert: yield self.store.add_users_who_share_room( room_id, not is_public, to_insert, ) if to_update: yield self.store.update_users_who_share_room( room_id, not is_public, to_update, ) @defer.inlineCallbacks def _handle_remove_user(self, room_id, user_id): """Called when we might need to remove user to directory Args: room_id (str): room_id that user left or stopped being public that user_id (str) """ logger.debug("Maybe removing user %r", user_id) row = yield self.store.get_user_in_directory(user_id) update_user_dir = row and row["room_id"] == room_id row = yield self.store.get_user_in_public_room(user_id) update_user_in_public = row and row["room_id"] == room_id if (update_user_in_public or update_user_dir): # XXX: Make this faster? rooms = yield self.store.get_rooms_for_user(user_id) for j_room_id in rooms: if (not update_user_in_public and not update_user_dir): break is_in_room = yield self.store.is_host_joined( j_room_id, self.server_name, ) if not is_in_room: continue if update_user_dir: update_user_dir = False yield self.store.update_user_in_user_dir(user_id, j_room_id) is_public = yield self.store.is_room_world_readable_or_publicly_joinable( j_room_id ) if update_user_in_public and is_public: yield self.store.update_user_in_public_user_list(user_id, j_room_id) update_user_in_public = False if update_user_dir: yield self.store.remove_from_user_dir(user_id) elif update_user_in_public: yield self.store.remove_from_user_in_public_room(user_id) # Now handle users_who_share_rooms. 
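# --- Illustrative sketch (not part of synapse): the per-pair decision that
# --- _handle_new_user() above applies when updating users_who_share_rooms.
# --- `shared_is_private` is the existing DB mapping for the pair (True,
# --- False or None); `is_public` describes the room being processed.
def _example_share_action(shared_is_private, is_public):
    if shared_is_private is True:
        return None                     # already share a private room
    if shared_is_private is False:
        # already share a public room; only upgrade if this room is private
        return None if is_public else "update"
    return "insert"                     # first room this pair shares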
# Get a list of user tuples that were in the DB due to this room and # users (this includes tuples where the other user matches `user_id`) user_tuples = yield self.store.get_users_in_share_dir_with_room_id( user_id, room_id, ) for user_id, other_user_id in user_tuples: # For each user tuple get a list of rooms that they still share, # trying to find a private room, and update the entry in the DB rooms = yield self.store.get_rooms_in_common_for_users(user_id, other_user_id) # If they dont share a room anymore, remove the mapping if not rooms: yield self.store.remove_user_who_share_room( user_id, other_user_id, ) continue found_public_share = None for j_room_id in rooms: is_public = yield self.store.is_room_world_readable_or_publicly_joinable( j_room_id ) if is_public: found_public_share = j_room_id else: found_public_share = None yield self.store.update_users_who_share_room( room_id, not is_public, [(user_id, other_user_id)], ) break if found_public_share: yield self.store.update_users_who_share_room( room_id, not is_public, [(user_id, other_user_id)], ) @defer.inlineCallbacks def _handle_profile_change(self, user_id, room_id, prev_event_id, event_id): """Check member event changes for any profile changes and update the database if there are. """ if not prev_event_id or not event_id: return prev_event = yield self.store.get_event(prev_event_id, allow_none=True) event = yield self.store.get_event(event_id, allow_none=True) if not prev_event or not event: return if event.membership != Membership.JOIN: return prev_name = prev_event.content.get("displayname") new_name = event.content.get("displayname") prev_avatar = prev_event.content.get("avatar_url") new_avatar = event.content.get("avatar_url") if prev_name != new_name or prev_avatar != new_avatar: yield self.store.update_profile_in_user_dir( user_id, new_name, new_avatar, room_id, ) @defer.inlineCallbacks def _get_key_change(self, prev_event_id, event_id, key_name, public_value): """Given two events check if the `key_name` field in content changed from not matching `public_value` to doing so. For example, check if `history_visibility` (`key_name`) changed from `shared` to `world_readable` (`public_value`). Returns: None if the field in the events either both match `public_value` or if neither do, i.e. there has been no change. 
True if it didnt match `public_value` but now does False if it did match `public_value` but now doesn't """ prev_event = None event = None if prev_event_id: prev_event = yield self.store.get_event(prev_event_id, allow_none=True) if event_id: event = yield self.store.get_event(event_id, allow_none=True) if not event and not prev_event: logger.debug("Neither event exists: %r %r", prev_event_id, event_id) defer.returnValue(None) prev_value = None value = None if prev_event: prev_value = prev_event.content.get(key_name) if event: value = event.content.get(key_name) logger.debug("prev_value: %r -> value: %r", prev_value, value) if value == public_value and prev_value != public_value: defer.returnValue(True) elif value != public_value and prev_value == public_value: defer.returnValue(False) else: defer.returnValue(None) synapse-0.24.0/synapse/http/000077500000000000000000000000001317335640100157235ustar00rootroot00000000000000synapse-0.24.0/synapse/http/__init__.py000066400000000000000000000011371317335640100200360ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. synapse-0.24.0/synapse/http/client.py000066400000000000000000000422221317335640100175550ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
from OpenSSL import SSL from OpenSSL.SSL import VERIFY_NONE from synapse.api.errors import ( CodeMessageException, MatrixCodeMessageException, SynapseError, Codes, ) from synapse.util.logcontext import preserve_context_over_fn from synapse.util import logcontext import synapse.metrics from synapse.http.endpoint import SpiderEndpoint from canonicaljson import encode_canonical_json from twisted.internet import defer, reactor, ssl, protocol, task from twisted.internet.endpoints import HostnameEndpoint, wrapClientTLS from twisted.web.client import ( BrowserLikeRedirectAgent, ContentDecoderAgent, GzipDecoder, Agent, readBody, PartialDownloadError, ) from twisted.web.client import FileBodyProducer as TwistedFileBodyProducer from twisted.web.http import PotentialDataLoss from twisted.web.http_headers import Headers from twisted.web._newclient import ResponseDone from StringIO import StringIO import simplejson as json import logging import urllib logger = logging.getLogger(__name__) metrics = synapse.metrics.get_metrics_for(__name__) outgoing_requests_counter = metrics.register_counter( "requests", labels=["method"], ) incoming_responses_counter = metrics.register_counter( "responses", labels=["method", "code"], ) class SimpleHttpClient(object): """ A simple, no-frills HTTP client with methods that wrap up common ways of using HTTP in Matrix """ def __init__(self, hs): self.hs = hs # The default context factory in Twisted 14.0.0 (which we require) is # BrowserLikePolicyForHTTPS which will do regular cert validation # 'like a browser' self.agent = Agent( reactor, connectTimeout=15, contextFactory=hs.get_http_client_context_factory() ) self.user_agent = hs.version_string self.clock = hs.get_clock() if hs.config.user_agent_suffix: self.user_agent = "%s %s" % (self.user_agent, hs.config.user_agent_suffix,) @defer.inlineCallbacks def request(self, method, uri, *args, **kwargs): # A small wrapper around self.agent.request() so we can easily attach # counters to it outgoing_requests_counter.inc(method) def send_request(): request_deferred = self.agent.request( method, uri, *args, **kwargs ) return self.clock.time_bound_deferred( request_deferred, time_out=60, ) logger.info("Sending request %s %s", method, uri) try: with logcontext.PreserveLoggingContext(): response = yield send_request() incoming_responses_counter.inc(method, response.code) logger.info( "Received response to %s %s: %s", method, uri, response.code ) defer.returnValue(response) except Exception as e: incoming_responses_counter.inc(method, "ERR") logger.info( "Error sending request to %s %s: %s %s", method, uri, type(e).__name__, e.message ) raise e @defer.inlineCallbacks def post_urlencoded_get_json(self, uri, args={}): # TODO: Do we ever want to log message contents? 
logger.debug("post_urlencoded_get_json args: %s", args) query_bytes = urllib.urlencode(encode_urlencode_args(args), True) response = yield self.request( "POST", uri.encode("ascii"), headers=Headers({ b"Content-Type": [b"application/x-www-form-urlencoded"], b"User-Agent": [self.user_agent], }), bodyProducer=FileBodyProducer(StringIO(query_bytes)) ) body = yield preserve_context_over_fn(readBody, response) defer.returnValue(json.loads(body)) @defer.inlineCallbacks def post_json_get_json(self, uri, post_json): json_str = encode_canonical_json(post_json) logger.debug("HTTP POST %s -> %s", json_str, uri) response = yield self.request( "POST", uri.encode("ascii"), headers=Headers({ b"Content-Type": [b"application/json"], b"User-Agent": [self.user_agent], }), bodyProducer=FileBodyProducer(StringIO(json_str)) ) body = yield preserve_context_over_fn(readBody, response) if 200 <= response.code < 300: defer.returnValue(json.loads(body)) else: raise self._exceptionFromFailedRequest(response, body) defer.returnValue(json.loads(body)) @defer.inlineCallbacks def get_json(self, uri, args={}): """ Gets some json from the given URI. Args: uri (str): The URI to request, not including query parameters args (dict): A dictionary used to create query strings, defaults to None. **Note**: The value of each key is assumed to be an iterable and *not* a string. Returns: Deferred: Succeeds when we get *any* 2xx HTTP response, with the HTTP body as JSON. Raises: On a non-2xx HTTP response. The response body will be used as the error message. """ try: body = yield self.get_raw(uri, args) defer.returnValue(json.loads(body)) except CodeMessageException as e: raise self._exceptionFromFailedRequest(e.code, e.msg) @defer.inlineCallbacks def put_json(self, uri, json_body, args={}): """ Puts some json to the given URI. Args: uri (str): The URI to request, not including query parameters json_body (dict): The JSON to put in the HTTP body, args (dict): A dictionary used to create query strings, defaults to None. **Note**: The value of each key is assumed to be an iterable and *not* a string. Returns: Deferred: Succeeds when we get *any* 2xx HTTP response, with the HTTP body as JSON. Raises: On a non-2xx HTTP response. """ if len(args): query_bytes = urllib.urlencode(args, True) uri = "%s?%s" % (uri, query_bytes) json_str = encode_canonical_json(json_body) response = yield self.request( "PUT", uri.encode("ascii"), headers=Headers({ b"User-Agent": [self.user_agent], "Content-Type": ["application/json"] }), bodyProducer=FileBodyProducer(StringIO(json_str)) ) body = yield preserve_context_over_fn(readBody, response) if 200 <= response.code < 300: defer.returnValue(json.loads(body)) else: # NB: This is explicitly not json.loads(body)'d because the contract # of CodeMessageException is a *string* message. Callers can always # load it into JSON if they want. raise CodeMessageException(response.code, body) @defer.inlineCallbacks def get_raw(self, uri, args={}): """ Gets raw text from the given URI. Args: uri (str): The URI to request, not including query parameters args (dict): A dictionary used to create query strings, defaults to None. **Note**: The value of each key is assumed to be an iterable and *not* a string. Returns: Deferred: Succeeds when we get *any* 2xx HTTP response, with the HTTP body at text. Raises: On a non-2xx HTTP response. The response body will be used as the error message. 
""" if len(args): query_bytes = urllib.urlencode(args, True) uri = "%s?%s" % (uri, query_bytes) response = yield self.request( "GET", uri.encode("ascii"), headers=Headers({ b"User-Agent": [self.user_agent], }) ) body = yield preserve_context_over_fn(readBody, response) if 200 <= response.code < 300: defer.returnValue(body) else: raise CodeMessageException(response.code, body) def _exceptionFromFailedRequest(self, response, body): try: jsonBody = json.loads(body) errcode = jsonBody['errcode'] error = jsonBody['error'] return MatrixCodeMessageException(response.code, error, errcode) except (ValueError, KeyError): return CodeMessageException(response.code, body) # XXX: FIXME: This is horribly copy-pasted from matrixfederationclient. # The two should be factored out. @defer.inlineCallbacks def get_file(self, url, output_stream, max_size=None): """GETs a file from a given URL Args: url (str): The URL to GET output_stream (file): File to write the response body to. Returns: A (int,dict,string,int) tuple of the file length, dict of the response headers, absolute URI of the response and HTTP response code. """ response = yield self.request( "GET", url.encode("ascii"), headers=Headers({ b"User-Agent": [self.user_agent], }) ) headers = dict(response.headers.getAllRawHeaders()) if 'Content-Length' in headers and headers['Content-Length'] > max_size: logger.warn("Requested URL is too large > %r bytes" % (self.max_size,)) raise SynapseError( 502, "Requested file is too large > %r bytes" % (self.max_size,), Codes.TOO_LARGE, ) if response.code > 299: logger.warn("Got %d when downloading %s" % (response.code, url)) raise SynapseError( 502, "Got error %d" % (response.code,), Codes.UNKNOWN, ) # TODO: if our Content-Type is HTML or something, just read the first # N bytes into RAM rather than saving it all to disk only to read it # straight back in again try: length = yield preserve_context_over_fn( _readBodyToFile, response, output_stream, max_size ) except Exception as e: logger.exception("Failed to download body") raise SynapseError( 502, ("Failed to download remote body: %s" % e), Codes.UNKNOWN, ) defer.returnValue((length, headers, response.request.absoluteURI, response.code)) # XXX: FIXME: This is horribly copy-pasted from matrixfederationclient. # The two should be factored out. class _ReadBodyToFileProtocol(protocol.Protocol): def __init__(self, stream, deferred, max_size): self.stream = stream self.deferred = deferred self.length = 0 self.max_size = max_size def dataReceived(self, data): self.stream.write(data) self.length += len(data) if self.max_size is not None and self.length >= self.max_size: self.deferred.errback(SynapseError( 502, "Requested file is too large > %r bytes" % (self.max_size,), Codes.TOO_LARGE, )) self.deferred = defer.Deferred() self.transport.loseConnection() def connectionLost(self, reason): if reason.check(ResponseDone): self.deferred.callback(self.length) elif reason.check(PotentialDataLoss): # stolen from https://github.com/twisted/treq/pull/49/files # http://twistedmatrix.com/trac/ticket/4840 self.deferred.callback(self.length) else: self.deferred.errback(reason) # XXX: FIXME: This is horribly copy-pasted from matrixfederationclient. # The two should be factored out. 
def _readBodyToFile(response, stream, max_size): d = defer.Deferred() response.deliverBody(_ReadBodyToFileProtocol(stream, d, max_size)) return d class CaptchaServerHttpClient(SimpleHttpClient): """ Separate HTTP client for talking to google's captcha servers Only slightly special because accepts partial download responses used only by c/s api v1 """ @defer.inlineCallbacks def post_urlencoded_get_raw(self, url, args={}): query_bytes = urllib.urlencode(encode_urlencode_args(args), True) response = yield self.request( "POST", url.encode("ascii"), bodyProducer=FileBodyProducer(StringIO(query_bytes)), headers=Headers({ b"Content-Type": [b"application/x-www-form-urlencoded"], b"User-Agent": [self.user_agent], }) ) try: body = yield preserve_context_over_fn(readBody, response) defer.returnValue(body) except PartialDownloadError as e: # twisted dislikes google's response, no content length. defer.returnValue(e.response) class SpiderEndpointFactory(object): def __init__(self, hs): self.blacklist = hs.config.url_preview_ip_range_blacklist self.whitelist = hs.config.url_preview_ip_range_whitelist self.policyForHTTPS = hs.get_http_client_context_factory() def endpointForURI(self, uri): logger.info("Getting endpoint for %s", uri.toBytes()) if uri.scheme == "http": endpoint_factory = HostnameEndpoint elif uri.scheme == "https": tlsCreator = self.policyForHTTPS.creatorForNetloc(uri.host, uri.port) def endpoint_factory(reactor, host, port, **kw): return wrapClientTLS( tlsCreator, HostnameEndpoint(reactor, host, port, **kw)) else: logger.warn("Can't get endpoint for unrecognised scheme %s", uri.scheme) return None return SpiderEndpoint( reactor, uri.host, uri.port, self.blacklist, self.whitelist, endpoint=endpoint_factory, endpoint_kw_args=dict(timeout=15), ) class SpiderHttpClient(SimpleHttpClient): """ Separate HTTP client for spidering arbitrary URLs. Special in that it follows retries and has a UA that looks like a browser. used by the preview_url endpoint in the content repo. """ def __init__(self, hs): SimpleHttpClient.__init__(self, hs) # clobber the base class's agent and UA: self.agent = ContentDecoderAgent( BrowserLikeRedirectAgent( Agent.usingEndpointFactory( reactor, SpiderEndpointFactory(hs) ) ), [('gzip', GzipDecoder)] ) # We could look like Chrome: # self.user_agent = ("Mozilla/5.0 (%s) (KHTML, like Gecko) # Chrome Safari" % hs.version_string) def encode_urlencode_args(args): return {k: encode_urlencode_arg(v) for k, v in args.items()} def encode_urlencode_arg(arg): if isinstance(arg, unicode): return arg.encode('utf-8') elif isinstance(arg, list): return [encode_urlencode_arg(i) for i in arg] else: return arg def _print_ex(e): if hasattr(e, "reasons") and e.reasons: for ex in e.reasons: _print_ex(ex) else: logger.exception(e) class InsecureInterceptableContextFactory(ssl.ContextFactory): """ Factory for PyOpenSSL SSL contexts which accepts any certificate for any domain. Do not use this since it allows an attacker to intercept your communications. """ def __init__(self): self._context = SSL.Context(SSL.SSLv23_METHOD) self._context.set_verify(VERIFY_NONE, lambda *_: None) def getContext(self, hostname=None, port=None): return self._context def creatorForNetloc(self, hostname, port): return self class FileBodyProducer(TwistedFileBodyProducer): """Workaround for https://twistedmatrix.com/trac/ticket/8473 We override the pauseProducing and resumeProducing methods in twisted's FileBodyProducer so that they do not raise exceptions if the task has already completed. 
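    Used for the request bodies built in this module, e.g. (as in
    post_json_get_json above):

        bodyProducer=FileBodyProducer(StringIO(json_str))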
""" def pauseProducing(self): try: super(FileBodyProducer, self).pauseProducing() except task.TaskDone: # task has already completed pass def resumeProducing(self): try: super(FileBodyProducer, self).resumeProducing() except task.NotPaused: # task was not paused (probably because it had already completed) pass synapse-0.24.0/synapse/http/endpoint.py000066400000000000000000000333211317335640100201170ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import socket from twisted.internet.endpoints import HostnameEndpoint, wrapClientTLS from twisted.internet import defer, reactor from twisted.internet.error import ConnectError from twisted.names import client, dns from twisted.names.error import DNSNameError, DomainError import collections import logging import random import time logger = logging.getLogger(__name__) SERVER_CACHE = {} # our record of an individual server which can be tried to reach a destination. # # "host" is actually a dotted-quad or ipv6 address string. Except when there's # no SRV record, in which case it is the original hostname. _Server = collections.namedtuple( "_Server", "priority weight host port expires" ) def matrix_federation_endpoint(reactor, destination, ssl_context_factory=None, timeout=None): """Construct an endpoint for the given matrix destination. Args: reactor: Twisted reactor. destination (bytes): The name of the server to connect to. ssl_context_factory (twisted.internet.ssl.ContextFactory): Factory which generates SSL contexts to use for TLS. timeout (int): connection timeout in seconds """ domain_port = destination.split(":") domain = domain_port[0] port = int(domain_port[1]) if domain_port[1:] else None endpoint_kw_args = {} if timeout is not None: endpoint_kw_args.update(timeout=timeout) if ssl_context_factory is None: transport_endpoint = HostnameEndpoint default_port = 8008 else: def transport_endpoint(reactor, host, port, timeout): return wrapClientTLS( ssl_context_factory, HostnameEndpoint(reactor, host, port, timeout=timeout)) default_port = 8448 if port is None: return _WrappingEndpointFac(SRVClientEndpoint( reactor, "matrix", domain, protocol="tcp", default_port=default_port, endpoint=transport_endpoint, endpoint_kw_args=endpoint_kw_args )) else: return _WrappingEndpointFac(transport_endpoint( reactor, domain, port, **endpoint_kw_args )) class _WrappingEndpointFac(object): def __init__(self, endpoint_fac): self.endpoint_fac = endpoint_fac @defer.inlineCallbacks def connect(self, protocolFactory): conn = yield self.endpoint_fac.connect(protocolFactory) conn = _WrappedConnection(conn) defer.returnValue(conn) class _WrappedConnection(object): """Wraps a connection and calls abort on it if it hasn't seen any action for 2.5-3 minutes. 
""" __slots__ = ["conn", "last_request"] def __init__(self, conn): object.__setattr__(self, "conn", conn) object.__setattr__(self, "last_request", time.time()) def __getattr__(self, name): return getattr(self.conn, name) def __setattr__(self, name, value): setattr(self.conn, name, value) def _time_things_out_maybe(self): # We use a slightly shorter timeout here just in case the callLater is # triggered early. Paranoia ftw. # TODO: Cancel the previous callLater rather than comparing time.time()? if time.time() - self.last_request >= 2.5 * 60: self.abort() # Abort the underlying TLS connection. The abort() method calls # loseConnection() on the underlying TLS connection which tries to # shutdown the connection cleanly. We call abortConnection() # since that will promptly close the underlying TCP connection. self.transport.abortConnection() def request(self, request): self.last_request = time.time() # Time this connection out if we haven't send a request in the last # N minutes # TODO: Cancel the previous callLater? reactor.callLater(3 * 60, self._time_things_out_maybe) d = self.conn.request(request) def update_request_time(res): self.last_request = time.time() # TODO: Cancel the previous callLater? reactor.callLater(3 * 60, self._time_things_out_maybe) return res d.addCallback(update_request_time) return d class SpiderEndpoint(object): """An endpoint which refuses to connect to blacklisted IP addresses Implements twisted.internet.interfaces.IStreamClientEndpoint. """ def __init__(self, reactor, host, port, blacklist, whitelist, endpoint=HostnameEndpoint, endpoint_kw_args={}): self.reactor = reactor self.host = host self.port = port self.blacklist = blacklist self.whitelist = whitelist self.endpoint = endpoint self.endpoint_kw_args = endpoint_kw_args @defer.inlineCallbacks def connect(self, protocolFactory): address = yield self.reactor.resolve(self.host) from netaddr import IPAddress ip_address = IPAddress(address) if ip_address in self.blacklist: if self.whitelist is None or ip_address not in self.whitelist: raise ConnectError( "Refusing to spider blacklisted IP address %s" % address ) logger.info("Connecting to %s:%s", address, self.port) endpoint = self.endpoint( self.reactor, address, self.port, **self.endpoint_kw_args ) connection = yield endpoint.connect(protocolFactory) defer.returnValue(connection) class SRVClientEndpoint(object): """An endpoint which looks up SRV records for a service. Cycles through the list of servers starting with each call to connect picking the next server. Implements twisted.internet.interfaces.IStreamClientEndpoint. 
""" def __init__(self, reactor, service, domain, protocol="tcp", default_port=None, endpoint=HostnameEndpoint, endpoint_kw_args={}): self.reactor = reactor self.service_name = "_%s._%s.%s" % (service, protocol, domain) if default_port is not None: self.default_server = _Server( host=domain, port=default_port, priority=0, weight=0, expires=0, ) else: self.default_server = None self.endpoint = endpoint self.endpoint_kw_args = endpoint_kw_args self.servers = None self.used_servers = None @defer.inlineCallbacks def fetch_servers(self): self.used_servers = [] self.servers = yield resolve_service(self.service_name) def pick_server(self): if not self.servers: if self.used_servers: self.servers = self.used_servers self.used_servers = [] self.servers.sort() elif self.default_server: return self.default_server else: raise ConnectError( "No server available for %s" % self.service_name ) # look for all servers with the same priority min_priority = self.servers[0].priority weight_indexes = list( (index, server.weight + 1) for index, server in enumerate(self.servers) if server.priority == min_priority ) total_weight = sum(weight for index, weight in weight_indexes) target_weight = random.randint(0, total_weight) for index, weight in weight_indexes: target_weight -= weight if target_weight <= 0: server = self.servers[index] # XXX: this looks totally dubious: # # (a) we never reuse a server until we have been through # all of the servers at the same priority, so if the # weights are A: 100, B:1, we always do ABABAB instead of # AAAA...AAAB (approximately). # # (b) After using all the servers at the lowest priority, # we move onto the next priority. We should only use the # second priority if servers at the top priority are # unreachable. # del self.servers[index] self.used_servers.append(server) return server @defer.inlineCallbacks def connect(self, protocolFactory): if self.servers is None: yield self.fetch_servers() server = self.pick_server() logger.info("Connecting to %s:%s", server.host, server.port) endpoint = self.endpoint( self.reactor, server.host, server.port, **self.endpoint_kw_args ) connection = yield endpoint.connect(protocolFactory) defer.returnValue(connection) @defer.inlineCallbacks def resolve_service(service_name, dns_client=client, cache=SERVER_CACHE, clock=time): cache_entry = cache.get(service_name, None) if cache_entry: if all(s.expires > int(clock.time()) for s in cache_entry): servers = list(cache_entry) defer.returnValue(servers) servers = [] try: try: answers, _, _ = yield dns_client.lookupService(service_name) except DNSNameError: defer.returnValue([]) if (len(answers) == 1 and answers[0].type == dns.SRV and answers[0].payload and answers[0].payload.target == dns.Name('.')): raise ConnectError("Service %s unavailable" % service_name) for answer in answers: if answer.type != dns.SRV or not answer.payload: continue payload = answer.payload hosts = yield _get_hosts_for_srv_record( dns_client, str(payload.target) ) for (ip, ttl) in hosts: host_ttl = min(answer.ttl, ttl) servers.append(_Server( host=ip, port=int(payload.port), priority=int(payload.priority), weight=int(payload.weight), expires=int(clock.time()) + host_ttl, )) servers.sort() cache[service_name] = list(servers) except DomainError as e: # We failed to resolve the name (other than a NameError) # Try something in the cache, else rereaise cache_entry = cache.get(service_name, None) if cache_entry: logger.warn( "Failed to resolve %r, falling back to cache. 
%r", service_name, e ) servers = list(cache_entry) else: raise e defer.returnValue(servers) @defer.inlineCallbacks def _get_hosts_for_srv_record(dns_client, host): """Look up each of the hosts in a SRV record Args: dns_client (twisted.names.dns.IResolver): host (basestring): host to look up Returns: Deferred[list[(str, int)]]: a list of (host, ttl) pairs """ ip4_servers = [] ip6_servers = [] def cb(res): # lookupAddress and lookupIP6Address return a three-tuple # giving the answer, authority, and additional sections of the # response. # # we only care about the answers. return res[0] def eb(res, record_type): if res.check(DNSNameError): return [] logger.warn("Error looking up %s for %s: %s", record_type, host, res, res.value) return res # no logcontexts here, so we can safely fire these off and gatherResults d1 = dns_client.lookupAddress(host).addCallbacks(cb, eb) d2 = dns_client.lookupIPV6Address(host).addCallbacks(cb, eb) results = yield defer.DeferredList( [d1, d2], consumeErrors=True) # if all of the lookups failed, raise an exception rather than blowing out # the cache with an empty result. if results and all(s == defer.FAILURE for (s, _) in results): defer.returnValue(results[0][1]) for (success, result) in results: if success == defer.FAILURE: continue for answer in result: if not answer.payload: continue try: if answer.type == dns.A: ip = answer.payload.dottedQuad() ip4_servers.append((ip, answer.ttl)) elif answer.type == dns.AAAA: ip = socket.inet_ntop( socket.AF_INET6, answer.payload.address, ) ip6_servers.append((ip, answer.ttl)) else: # the most likely candidate here is a CNAME record. # rfc2782 says srvs may not point to aliases. logger.warn( "Ignoring unexpected DNS record type %s for %s", answer.type, host, ) continue except Exception as e: logger.warn("Ignoring invalid DNS response for %s: %s", host, e) continue # keep the ipv4 results before the ipv6 results, mostly to match historical # behaviour. defer.returnValue(ip4_servers + ip6_servers) synapse-0.24.0/synapse/http/matrixfederationclient.py000066400000000000000000000555301317335640100230510ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
import synapse.util.retryutils from twisted.internet import defer, reactor, protocol from twisted.internet.error import DNSLookupError from twisted.web.client import readBody, HTTPConnectionPool, Agent from twisted.web.http_headers import Headers from twisted.web._newclient import ResponseDone from synapse.http.endpoint import matrix_federation_endpoint from synapse.util.async import sleep from synapse.util import logcontext import synapse.metrics from canonicaljson import encode_canonical_json from synapse.api.errors import ( SynapseError, Codes, HttpResponseException, ) from signedjson.sign import sign_json import cgi import simplejson as json import logging import random import sys import urllib import urlparse logger = logging.getLogger(__name__) outbound_logger = logging.getLogger("synapse.http.outbound") metrics = synapse.metrics.get_metrics_for(__name__) outgoing_requests_counter = metrics.register_counter( "requests", labels=["method"], ) incoming_responses_counter = metrics.register_counter( "responses", labels=["method", "code"], ) MAX_LONG_RETRIES = 10 MAX_SHORT_RETRIES = 3 class MatrixFederationEndpointFactory(object): def __init__(self, hs): self.tls_server_context_factory = hs.tls_server_context_factory def endpointForURI(self, uri): destination = uri.netloc return matrix_federation_endpoint( reactor, destination, timeout=10, ssl_context_factory=self.tls_server_context_factory ) class MatrixFederationHttpClient(object): """HTTP client used to talk to other homeservers over the federation protocol. Send client certificates and signs requests. Attributes: agent (twisted.web.client.Agent): The twisted Agent used to send the requests. """ def __init__(self, hs): self.hs = hs self.signing_key = hs.config.signing_key[0] self.server_name = hs.hostname pool = HTTPConnectionPool(reactor) pool.maxPersistentPerHost = 5 pool.cachedConnectionTimeout = 2 * 60 self.agent = Agent.usingEndpointFactory( reactor, MatrixFederationEndpointFactory(hs), pool=pool ) self.clock = hs.get_clock() self._store = hs.get_datastore() self.version_string = hs.version_string self._next_id = 1 def _create_url(self, destination, path_bytes, param_bytes, query_bytes): return urlparse.urlunparse( ("matrix", destination, path_bytes, param_bytes, query_bytes, "") ) @defer.inlineCallbacks def _request(self, destination, method, path, body_callback, headers_dict={}, param_bytes=b"", query_bytes=b"", retry_on_dns_fail=True, timeout=None, long_retries=False, ignore_backoff=False, backoff_on_404=False): """ Creates and sends a request to the given server Args: destination (str): The remote server to send the HTTP request to. method (str): HTTP method path (str): The HTTP path ignore_backoff (bool): true to ignore the historical backoff data and try the request anyway. backoff_on_404 (bool): Back off if we get a 404 Returns: Deferred: resolves with the http response object on success. Fails with ``HTTPRequestException``: if we get an HTTP response code >= 300. Fails with ``NotRetryingDestination`` if we are not yet ready to retry this server. (May also fail with plenty of other Exceptions for things like DNS failures, connection failures, SSL failures.) 
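        Retry behaviour (orientation for the code below; the ``attempt`` index is
        illustrative): transient failures are retried with jittered exponential
        backoff unless ``timeout`` is set, roughly

            delay = min(0.5 * 2 ** attempt, 2) * uniform(0.8, 1.4)   # short retries
            delay = min(4 ** (attempt + 1), 60) * uniform(0.8, 1.4)  # long retries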
""" limiter = yield synapse.util.retryutils.get_retry_limiter( destination, self.clock, self._store, backoff_on_404=backoff_on_404, ignore_backoff=ignore_backoff, ) destination = destination.encode("ascii") path_bytes = path.encode("ascii") with limiter: headers_dict[b"User-Agent"] = [self.version_string] headers_dict[b"Host"] = [destination] url_bytes = self._create_url( destination, path_bytes, param_bytes, query_bytes ) txn_id = "%s-O-%s" % (method, self._next_id) self._next_id = (self._next_id + 1) % (sys.maxint - 1) outbound_logger.info( "{%s} [%s] Sending request: %s %s", txn_id, destination, method, url_bytes ) # XXX: Would be much nicer to retry only at the transaction-layer # (once we have reliable transactions in place) if long_retries: retries_left = MAX_LONG_RETRIES else: retries_left = MAX_SHORT_RETRIES http_url_bytes = urlparse.urlunparse( ("", "", path_bytes, param_bytes, query_bytes, "") ) log_result = None try: while True: producer = None if body_callback: producer = body_callback(method, http_url_bytes, headers_dict) try: def send_request(): request_deferred = self.agent.request( method, url_bytes, Headers(headers_dict), producer ) return self.clock.time_bound_deferred( request_deferred, time_out=timeout / 1000. if timeout else 60, ) with logcontext.PreserveLoggingContext(): response = yield send_request() log_result = "%d %s" % (response.code, response.phrase,) break except Exception as e: if not retry_on_dns_fail and isinstance(e, DNSLookupError): logger.warn( "DNS Lookup failed to %s with %s", destination, e ) log_result = "DNS Lookup failed to %s with %s" % ( destination, e ) raise logger.warn( "{%s} Sending request failed to %s: %s %s: %s", txn_id, destination, method, url_bytes, _flatten_response_never_received(e), ) log_result = _flatten_response_never_received(e) if retries_left and not timeout: if long_retries: delay = 4 ** (MAX_LONG_RETRIES + 1 - retries_left) delay = min(delay, 60) delay *= random.uniform(0.8, 1.4) else: delay = 0.5 * 2 ** (MAX_SHORT_RETRIES - retries_left) delay = min(delay, 2) delay *= random.uniform(0.8, 1.4) yield sleep(delay) retries_left -= 1 else: raise finally: outbound_logger.info( "{%s} [%s] Result: %s", txn_id, destination, log_result, ) if 200 <= response.code < 300: pass else: # :'( # Update transactions table? with logcontext.PreserveLoggingContext(): body = yield readBody(response) raise HttpResponseException( response.code, response.phrase, body ) defer.returnValue(response) def sign_request(self, destination, method, url_bytes, headers_dict, content=None): request = { "method": method, "uri": url_bytes, "origin": self.server_name, "destination": destination, } if content is not None: request["content"] = content request = sign_json(request, self.server_name, self.signing_key) auth_headers = [] for key, sig in request["signatures"][self.server_name].items(): auth_headers.append(bytes( "X-Matrix origin=%s,key=\"%s\",sig=\"%s\"" % ( self.server_name, key, sig, ) )) headers_dict[b"Authorization"] = auth_headers @defer.inlineCallbacks def put_json(self, destination, path, data={}, json_data_callback=None, long_retries=False, timeout=None, ignore_backoff=False, backoff_on_404=False): """ Sends the specifed json data using PUT Args: destination (str): The remote server to send the HTTP request to. path (str): The HTTP path. data (dict): A dict containing the data that will be used as the request body. This will be encoded as JSON. json_data_callback (callable): A callable returning the dict to use as the request body. 
long_retries (bool): A boolean that indicates whether we should retry for a short or long time. timeout(int): How long to try (in ms) the destination for before giving up. None indicates no timeout. ignore_backoff (bool): true to ignore the historical backoff data and try the request anyway. backoff_on_404 (bool): True if we should count a 404 response as a failure of the server (and should therefore back off future requests) Returns: Deferred: Succeeds when we get a 2xx HTTP response. The result will be the decoded JSON body. Fails with ``HTTPRequestException`` if we get an HTTP response code >= 300. Fails with ``NotRetryingDestination`` if we are not yet ready to retry this server. """ if not json_data_callback: def json_data_callback(): return data def body_callback(method, url_bytes, headers_dict): json_data = json_data_callback() self.sign_request( destination, method, url_bytes, headers_dict, json_data ) producer = _JsonProducer(json_data) return producer response = yield self._request( destination, "PUT", path, body_callback=body_callback, headers_dict={"Content-Type": ["application/json"]}, long_retries=long_retries, timeout=timeout, ignore_backoff=ignore_backoff, backoff_on_404=backoff_on_404, ) if 200 <= response.code < 300: # We need to update the transactions table to say it was sent? check_content_type_is_json(response.headers) with logcontext.PreserveLoggingContext(): body = yield readBody(response) defer.returnValue(json.loads(body)) @defer.inlineCallbacks def post_json(self, destination, path, data={}, long_retries=False, timeout=None, ignore_backoff=False, args={}): """ Sends the specifed json data using POST Args: destination (str): The remote server to send the HTTP request to. path (str): The HTTP path. data (dict): A dict containing the data that will be used as the request body. This will be encoded as JSON. long_retries (bool): A boolean that indicates whether we should retry for a short or long time. timeout(int): How long to try (in ms) the destination for before giving up. None indicates no timeout. ignore_backoff (bool): true to ignore the historical backoff data and try the request anyway. Returns: Deferred: Succeeds when we get a 2xx HTTP response. The result will be the decoded JSON body. Fails with ``HTTPRequestException`` if we get an HTTP response code >= 300. Fails with ``NotRetryingDestination`` if we are not yet ready to retry this server. """ def body_callback(method, url_bytes, headers_dict): self.sign_request( destination, method, url_bytes, headers_dict, data ) return _JsonProducer(data) response = yield self._request( destination, "POST", path, query_bytes=encode_query_args(args), body_callback=body_callback, headers_dict={"Content-Type": ["application/json"]}, long_retries=long_retries, timeout=timeout, ignore_backoff=ignore_backoff, ) if 200 <= response.code < 300: # We need to update the transactions table to say it was sent? check_content_type_is_json(response.headers) with logcontext.PreserveLoggingContext(): body = yield readBody(response) defer.returnValue(json.loads(body)) @defer.inlineCallbacks def get_json(self, destination, path, args={}, retry_on_dns_fail=True, timeout=None, ignore_backoff=False): """ GETs some json from the given host homeserver and path Args: destination (str): The remote server to send the HTTP request to. path (str): The HTTP path. args (dict): A dictionary used to create query strings, defaults to None. timeout (int): How long to try (in ms) the destination for before giving up. 
None indicates no timeout and that the request will be retried. ignore_backoff (bool): true to ignore the historical backoff data and try the request anyway. Returns: Deferred: Succeeds when we get a 2xx HTTP response. The result will be the decoded JSON body. Fails with ``HTTPRequestException`` if we get an HTTP response code >= 300. Fails with ``NotRetryingDestination`` if we are not yet ready to retry this server. """ logger.debug("get_json args: %s", args) logger.debug("Query bytes: %s Retry DNS: %s", args, retry_on_dns_fail) def body_callback(method, url_bytes, headers_dict): self.sign_request(destination, method, url_bytes, headers_dict) return None response = yield self._request( destination, "GET", path, query_bytes=encode_query_args(args), body_callback=body_callback, retry_on_dns_fail=retry_on_dns_fail, timeout=timeout, ignore_backoff=ignore_backoff, ) if 200 <= response.code < 300: # We need to update the transactions table to say it was sent? check_content_type_is_json(response.headers) with logcontext.PreserveLoggingContext(): body = yield readBody(response) defer.returnValue(json.loads(body)) @defer.inlineCallbacks def delete_json(self, destination, path, long_retries=False, timeout=None, ignore_backoff=False, args={}): """Send a DELETE request to the remote expecting some json response Args: destination (str): The remote server to send the HTTP request to. path (str): The HTTP path. long_retries (bool): A boolean that indicates whether we should retry for a short or long time. timeout(int): How long to try (in ms) the destination for before giving up. None indicates no timeout. ignore_backoff (bool): true to ignore the historical backoff data and try the request anyway. Returns: Deferred: Succeeds when we get a 2xx HTTP response. The result will be the decoded JSON body. Fails with ``HTTPRequestException`` if we get an HTTP response code >= 300. Fails with ``NotRetryingDestination`` if we are not yet ready to retry this server. """ response = yield self._request( destination, "DELETE", path, query_bytes=encode_query_args(args), headers_dict={"Content-Type": ["application/json"]}, long_retries=long_retries, timeout=timeout, ignore_backoff=ignore_backoff, ) if 200 <= response.code < 300: # We need to update the transactions table to say it was sent? check_content_type_is_json(response.headers) with logcontext.PreserveLoggingContext(): body = yield readBody(response) defer.returnValue(json.loads(body)) @defer.inlineCallbacks def get_file(self, destination, path, output_stream, args={}, retry_on_dns_fail=True, max_size=None, ignore_backoff=False): """GETs a file from a given homeserver Args: destination (str): The remote server to send the HTTP request to. path (str): The HTTP path to GET. output_stream (file): File to write the response body to. args (dict): Optional dictionary used to create the query string. ignore_backoff (bool): true to ignore the historical backoff data and try the request anyway. Returns: Deferred: resolves with an (int,dict) tuple of the file length and a dict of the response headers. Fails with ``HTTPRequestException`` if we get an HTTP response code >= 300 Fails with ``NotRetryingDestination`` if we are not yet ready to retry this server. 
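        Example (illustrative; the destination, media path and size limit are
        invented, and ``client`` is an instance of this class):

            length, headers = yield client.get_file(
                "remote.example.com",
                "/_matrix/media/v1/download/remote.example.com/someMediaId",
                output_stream=open("/tmp/media", "wb"),
                max_size=10 * 1024 * 1024,
            )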
""" encoded_args = {} for k, vs in args.items(): if isinstance(vs, basestring): vs = [vs] encoded_args[k] = [v.encode("UTF-8") for v in vs] query_bytes = urllib.urlencode(encoded_args, True) logger.debug("Query bytes: %s Retry DNS: %s", query_bytes, retry_on_dns_fail) def body_callback(method, url_bytes, headers_dict): self.sign_request(destination, method, url_bytes, headers_dict) return None response = yield self._request( destination, "GET", path, query_bytes=query_bytes, body_callback=body_callback, retry_on_dns_fail=retry_on_dns_fail, ignore_backoff=ignore_backoff, ) headers = dict(response.headers.getAllRawHeaders()) try: with logcontext.PreserveLoggingContext(): length = yield _readBodyToFile( response, output_stream, max_size ) except: logger.exception("Failed to download body") raise defer.returnValue((length, headers)) class _ReadBodyToFileProtocol(protocol.Protocol): def __init__(self, stream, deferred, max_size): self.stream = stream self.deferred = deferred self.length = 0 self.max_size = max_size def dataReceived(self, data): self.stream.write(data) self.length += len(data) if self.max_size is not None and self.length >= self.max_size: self.deferred.errback(SynapseError( 502, "Requested file is too large > %r bytes" % (self.max_size,), Codes.TOO_LARGE, )) self.deferred = defer.Deferred() self.transport.loseConnection() def connectionLost(self, reason): if reason.check(ResponseDone): self.deferred.callback(self.length) else: self.deferred.errback(reason) def _readBodyToFile(response, stream, max_size): d = defer.Deferred() response.deliverBody(_ReadBodyToFileProtocol(stream, d, max_size)) return d class _JsonProducer(object): """ Used by the twisted http client to create the HTTP body from json """ def __init__(self, jsn): self.reset(jsn) def reset(self, jsn): self.body = encode_canonical_json(jsn) self.length = len(self.body) def startProducing(self, consumer): consumer.write(self.body) return defer.succeed(None) def pauseProducing(self): pass def stopProducing(self): pass def resumeProducing(self): pass def _flatten_response_never_received(e): if hasattr(e, "reasons"): reasons = ", ".join( _flatten_response_never_received(f.value) for f in e.reasons ) return "%s:[%s]" % (type(e).__name__, reasons) else: return repr(e) def check_content_type_is_json(headers): """ Check that a set of HTTP headers have a Content-Type header, and that it is application/json. Args: headers (twisted.web.http_headers.Headers): headers to check Raises: RuntimeError if the """ c_type = headers.getRawHeaders("Content-Type") if c_type is None: raise RuntimeError( "No Content-Type header" ) c_type = c_type[0] # only the first header val, options = cgi.parse_header(c_type) if val != "application/json": raise RuntimeError( "Content-Type not application/json: was '%s'" % c_type ) def encode_query_args(args): encoded_args = {} for k, vs in args.items(): if isinstance(vs, basestring): vs = [vs] encoded_args[k] = [v.encode("UTF-8") for v in vs] query_bytes = urllib.urlencode(encoded_args, True) return query_bytes synapse-0.24.0/synapse/http/server.py000066400000000000000000000400111317335640100175770ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from synapse.api.errors import ( cs_exception, SynapseError, CodeMessageException, UnrecognizedRequestError, Codes ) from synapse.util.logcontext import LoggingContext, PreserveLoggingContext from synapse.util.caches import intern_dict from synapse.util.metrics import Measure import synapse.metrics import synapse.events from canonicaljson import ( encode_canonical_json, encode_pretty_printed_json ) from twisted.internet import defer from twisted.web import server, resource from twisted.web.server import NOT_DONE_YET from twisted.web.util import redirectTo import collections import logging import urllib import ujson logger = logging.getLogger(__name__) metrics = synapse.metrics.get_metrics_for(__name__) incoming_requests_counter = metrics.register_counter( "requests", labels=["method", "servlet", "tag"], ) outgoing_responses_counter = metrics.register_counter( "responses", labels=["method", "code"], ) response_timer = metrics.register_distribution( "response_time", labels=["method", "servlet", "tag"] ) response_ru_utime = metrics.register_distribution( "response_ru_utime", labels=["method", "servlet", "tag"] ) response_ru_stime = metrics.register_distribution( "response_ru_stime", labels=["method", "servlet", "tag"] ) response_db_txn_count = metrics.register_distribution( "response_db_txn_count", labels=["method", "servlet", "tag"] ) response_db_txn_duration = metrics.register_distribution( "response_db_txn_duration", labels=["method", "servlet", "tag"] ) _next_request_id = 0 def request_handler(include_metrics=False): """Decorator for ``wrap_request_handler``""" return lambda request_handler: wrap_request_handler(request_handler, include_metrics) def wrap_request_handler(request_handler, include_metrics=False): """Wraps a method that acts as a request handler with the necessary logging and exception handling. The method must have a signature of "handle_foo(self, request)". The argument "self" must have "version_string" and "clock" attributes. The argument "request" must be a twisted HTTP request. The method must return a deferred. If the deferred succeeds we assume that a response has been sent. If the deferred fails with a SynapseError we use it to send a JSON response with the appropriate HTTP reponse code. If the deferred fails with any other type of error we send a 500 reponse. We insert a unique request-id into the logging context for this request and log the response and duration for this request. 
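    Usage sketch (illustrative; PingResource is invented, but mirrors how
    JsonResource below drives the decorator):

        class PingResource(resource.Resource):
            isLeaf = True

            def __init__(self, hs):
                resource.Resource.__init__(self)
                self.clock = hs.get_clock()
                self.version_string = hs.version_string

            def render(self, request):
                self._async_render(request)
                return server.NOT_DONE_YET

            @request_handler()
            @defer.inlineCallbacks
            def _async_render(self, request):
                body = yield defer.succeed({"pong": True})
                respond_with_json(request, 200, body,
                                  version_string=self.version_string)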
""" @defer.inlineCallbacks def wrapped_request_handler(self, request): global _next_request_id request_id = "%s-%s" % (request.method, _next_request_id) _next_request_id += 1 with LoggingContext(request_id) as request_context: with Measure(self.clock, "wrapped_request_handler"): request_metrics = RequestMetrics() request_metrics.start(self.clock, name=self.__class__.__name__) request_context.request = request_id with request.processing(): try: with PreserveLoggingContext(request_context): if include_metrics: yield request_handler(self, request, request_metrics) else: yield request_handler(self, request) except CodeMessageException as e: code = e.code if isinstance(e, SynapseError): logger.info( "%s SynapseError: %s - %s", request, code, e.msg ) else: logger.exception(e) outgoing_responses_counter.inc(request.method, str(code)) respond_with_json( request, code, cs_exception(e), send_cors=True, pretty_print=_request_user_agent_is_curl(request), version_string=self.version_string, ) except: logger.exception( "Failed handle request %s.%s on %r: %r", request_handler.__module__, request_handler.__name__, self, request ) respond_with_json( request, 500, { "error": "Internal server error", "errcode": Codes.UNKNOWN, }, send_cors=True, pretty_print=_request_user_agent_is_curl(request), version_string=self.version_string, ) finally: try: request_metrics.stop( self.clock, request ) except Exception as e: logger.warn("Failed to stop metrics: %r", e) return wrapped_request_handler class HttpServer(object): """ Interface for registering callbacks on a HTTP server """ def register_paths(self, method, path_patterns, callback): """ Register a callback that gets fired if we receive a http request with the given method for a path that matches the given regex. If the regex contains groups these gets passed to the calback via an unpacked tuple. Args: method (str): The method to listen to. path_patterns (list): The regex used to match requests. callback (function): The function to fire if we receive a matched request. The first argument will be the request object and subsequent arguments will be any matched groups from the regex. This should return a tuple of (code, response). """ pass class JsonResource(HttpServer, resource.Resource): """ This implements the HttpServer interface and provides JSON support for Resources. Register callbacks via register_path() Callbacks can return a tuple of status code and a dict in which case the the dict will automatically be sent to the client as a JSON object. The JsonResource is primarily intended for returning JSON, but callbacks may send something other than JSON, they may do so by using the methods on the request object and instead returning None. """ isLeaf = True _PathEntry = collections.namedtuple("_PathEntry", ["pattern", "callback"]) def __init__(self, hs, canonical_json=True): resource.Resource.__init__(self) self.canonical_json = canonical_json self.clock = hs.get_clock() self.path_regexs = {} self.version_string = hs.version_string self.hs = hs def register_paths(self, method, path_patterns, callback): for path_pattern in path_patterns: logger.debug("Registering for %s %s", method, path_pattern.pattern) self.path_regexs.setdefault(method, []).append( self._PathEntry(path_pattern, callback) ) def render(self, request): """ This gets called by twisted every time someone sends us a request. """ self._async_render(request) return server.NOT_DONE_YET # Disable metric reporting because _async_render does its own metrics. 
# It does its own metric reporting because _async_render dispatches to # a callback and it's the class name of that callback we want to report # against rather than the JsonResource itself. @request_handler(include_metrics=True) @defer.inlineCallbacks def _async_render(self, request, request_metrics): """ This gets called from render() every time someone sends us a request. This checks if anyone has registered a callback for that method and path. """ if request.method == "OPTIONS": self._send_response(request, 200, {}) return # Loop through all the registered callbacks to check if the method # and path regex match for path_entry in self.path_regexs.get(request.method, []): m = path_entry.pattern.match(request.path) if not m: continue # We found a match! Trigger callback and then return the # returned response. We pass both the request and any # matched groups from the regex to the callback. callback = path_entry.callback kwargs = intern_dict({ name: urllib.unquote(value).decode("UTF-8") if value else value for name, value in m.groupdict().items() }) callback_return = yield callback(request, **kwargs) if callback_return is not None: code, response = callback_return self._send_response(request, code, response) servlet_instance = getattr(callback, "__self__", None) if servlet_instance is not None: servlet_classname = servlet_instance.__class__.__name__ else: servlet_classname = "%r" % callback request_metrics.name = servlet_classname return # Huh. No one wanted to handle that? Fiiiiiine. Send 400. raise UnrecognizedRequestError() def _send_response(self, request, code, response_json_object, response_code_message=None): # could alternatively use request.notifyFinish() and flip a flag when # the Deferred fires, but since the flag is RIGHT THERE it seems like # a waste. if request._disconnected: logger.warn( "Not sending response to request %s, already disconnected.", request) return outgoing_responses_counter.inc(request.method, str(code)) # TODO: Only enable CORS for the requests that need it. 
respond_with_json( request, code, response_json_object, send_cors=True, response_code_message=response_code_message, pretty_print=_request_user_agent_is_curl(request), version_string=self.version_string, canonical_json=self.canonical_json, ) class RequestMetrics(object): def start(self, clock, name): self.start = clock.time_msec() self.start_context = LoggingContext.current_context() self.name = name def stop(self, clock, request): context = LoggingContext.current_context() tag = "" if context: tag = context.tag if context != self.start_context: logger.warn( "Context have unexpectedly changed %r, %r", context, self.start_context ) return incoming_requests_counter.inc(request.method, self.name, tag) response_timer.inc_by( clock.time_msec() - self.start, request.method, self.name, tag ) ru_utime, ru_stime = context.get_resource_usage() response_ru_utime.inc_by( ru_utime, request.method, self.name, tag ) response_ru_stime.inc_by( ru_stime, request.method, self.name, tag ) response_db_txn_count.inc_by( context.db_txn_count, request.method, self.name, tag ) response_db_txn_duration.inc_by( context.db_txn_duration, request.method, self.name, tag ) class RootRedirect(resource.Resource): """Redirects the root '/' path to another path.""" def __init__(self, path): resource.Resource.__init__(self) self.url = path def render_GET(self, request): return redirectTo(self.url, request) def getChild(self, name, request): if len(name) == 0: return self # select ourselves as the child to render return resource.Resource.getChild(self, name, request) def respond_with_json(request, code, json_object, send_cors=False, response_code_message=None, pretty_print=False, version_string="", canonical_json=True): if pretty_print: json_bytes = encode_pretty_printed_json(json_object) + "\n" else: if canonical_json or synapse.events.USE_FROZEN_DICTS: json_bytes = encode_canonical_json(json_object) else: # ujson doesn't like frozen_dicts. json_bytes = ujson.dumps(json_object, ensure_ascii=False) return respond_with_json_bytes( request, code, json_bytes, send_cors=send_cors, response_code_message=response_code_message, version_string=version_string ) def respond_with_json_bytes(request, code, json_bytes, send_cors=False, version_string="", response_code_message=None): """Sends encoded JSON in response to the given request. Args: request (twisted.web.http.Request): The http request to respond to. code (int): The HTTP response code. json_bytes (bytes): The json bytes to use as the response body. send_cors (bool): Whether to send Cross-Origin Resource Sharing headers http://www.w3.org/TR/cors/ Returns: twisted.web.server.NOT_DONE_YET""" request.setResponseCode(code, message=response_code_message) request.setHeader(b"Content-Type", b"application/json") request.setHeader(b"Server", version_string) request.setHeader(b"Content-Length", b"%d" % (len(json_bytes),)) if send_cors: set_cors_headers(request) request.write(json_bytes) finish_request(request) return NOT_DONE_YET def set_cors_headers(request): """Set the CORs headers so that javascript running in a web browsers can use this API Args: request (twisted.web.http.Request): The http request to add CORs to. """ request.setHeader("Access-Control-Allow-Origin", "*") request.setHeader( "Access-Control-Allow-Methods", "GET, POST, PUT, DELETE, OPTIONS" ) request.setHeader( "Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept, Authorization" ) def finish_request(request): """ Finish writing the response to the request. 
Twisted throws a RuntimeException if the connection closed before the response was written but doesn't provide a convenient or reliable way to determine if the connection was closed. So we catch and log the RuntimeException You might think that ``request.notifyFinish`` could be used to tell if the request was finished. However the deferred it returns won't fire if the connection was already closed, meaning we'd have to have called the method right at the start of the request. By the time we want to write the response it will already be too late. """ try: request.finish() except RuntimeError as e: logger.info("Connection disconnected before response was written: %r", e) def _request_user_agent_is_curl(request): user_agents = request.requestHeaders.getRawHeaders( "User-Agent", default=[] ) for user_agent in user_agents: if "curl" in user_agent: return True return False synapse-0.24.0/synapse/http/servlet.py000066400000000000000000000172631317335640100177720ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ This module contains base REST classes for constructing REST servlets. """ from synapse.api.errors import SynapseError, Codes import logging import simplejson logger = logging.getLogger(__name__) def parse_integer(request, name, default=None, required=False): """Parse an integer parameter from the request string Args: request: the twisted HTTP request. name (str): the name of the query parameter. default (int|None): value to use if the parameter is absent, defaults to None. required (bool): whether to raise a 400 SynapseError if the parameter is absent, defaults to False. Returns: int|None: An int value or the default. Raises: SynapseError: if the parameter is absent and required, or if the parameter is present and not an integer. """ return parse_integer_from_args(request.args, name, default, required) def parse_integer_from_args(args, name, default=None, required=False): if name in args: try: return int(args[name][0]) except: message = "Query parameter %r must be an integer" % (name,) raise SynapseError(400, message) else: if required: message = "Missing integer query parameter %r" % (name,) raise SynapseError(400, message, errcode=Codes.MISSING_PARAM) else: return default def parse_boolean(request, name, default=None, required=False): """Parse a boolean parameter from the request query string Args: request: the twisted HTTP request. name (str): the name of the query parameter. default (bool|None): value to use if the parameter is absent, defaults to None. required (bool): whether to raise a 400 SynapseError if the parameter is absent, defaults to False. Returns: bool|None: A bool value or the default. Raises: SynapseError: if the parameter is absent and required, or if the parameter is present and not one of "true" or "false". 
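        Example (illustrative; the endpoint and parameter name are invented):

            # GET /_matrix/client/r0/sync?full_state=true
            full_state = parse_boolean(request, "full_state", default=False)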
""" return parse_boolean_from_args(request.args, name, default, required) def parse_boolean_from_args(args, name, default=None, required=False): if name in args: try: return { "true": True, "false": False, }[args[name][0]] except: message = ( "Boolean query parameter %r must be one of" " ['true', 'false']" ) % (name,) raise SynapseError(400, message) else: if required: message = "Missing boolean query parameter %r" % (name,) raise SynapseError(400, message, errcode=Codes.MISSING_PARAM) else: return default def parse_string(request, name, default=None, required=False, allowed_values=None, param_type="string"): """Parse a string parameter from the request query string. Args: request: the twisted HTTP request. name (str): the name of the query parameter. default (str|None): value to use if the parameter is absent, defaults to None. required (bool): whether to raise a 400 SynapseError if the parameter is absent, defaults to False. allowed_values (list[str]): List of allowed values for the string, or None if any value is allowed, defaults to None Returns: str|None: A string value or the default. Raises: SynapseError if the parameter is absent and required, or if the parameter is present, must be one of a list of allowed values and is not one of those allowed values. """ return parse_string_from_args( request.args, name, default, required, allowed_values, param_type, ) def parse_string_from_args(args, name, default=None, required=False, allowed_values=None, param_type="string"): if name in args: value = args[name][0] if allowed_values is not None and value not in allowed_values: message = "Query parameter %r must be one of [%s]" % ( name, ", ".join(repr(v) for v in allowed_values) ) raise SynapseError(400, message) else: return value else: if required: message = "Missing %s query parameter %r" % (param_type, name) raise SynapseError(400, message, errcode=Codes.MISSING_PARAM) else: return default def parse_json_value_from_request(request): """Parse a JSON value from the body of a twisted HTTP request. Args: request: the twisted HTTP request. Returns: The JSON value. Raises: SynapseError if the request body couldn't be decoded as JSON. """ try: content_bytes = request.content.read() except: raise SynapseError(400, "Error reading JSON content.") try: content = simplejson.loads(content_bytes) except simplejson.JSONDecodeError: raise SynapseError(400, "Content not JSON.", errcode=Codes.NOT_JSON) return content def parse_json_object_from_request(request): """Parse a JSON object from the body of a twisted HTTP request. Args: request: the twisted HTTP request. Raises: SynapseError if the request body couldn't be decoded as JSON or if it wasn't a JSON object. """ content = parse_json_value_from_request(request) if type(content) != dict: message = "Content must be a JSON object." raise SynapseError(400, message, errcode=Codes.BAD_JSON) return content def assert_params_in_request(body, required): absent = [] for k in required: if k not in body: absent.append(k) if len(absent) > 0: raise SynapseError(400, "Missing params: %r" % absent, Codes.MISSING_PARAM) class RestServlet(object): """ A Synapse REST Servlet. An implementing class can either provide its own custom 'register' method, or use the automatic pattern handling provided by the base class. To use this latter, the implementing class instead provides a `PATTERN` class attribute containing a pre-compiled regular expression. 
The automatic register method will then use this method to register any of the following instance methods associated with the corresponding HTTP method: on_GET on_PUT on_POST on_DELETE on_OPTIONS Automatically handles turning CodeMessageExceptions thrown by these methods into the appropriate HTTP response. """ def register(self, http_server): """ Register this servlet with the given HTTP server. """ if hasattr(self, "PATTERNS"): patterns = self.PATTERNS for method in ("GET", "PUT", "POST", "OPTIONS", "DELETE"): if hasattr(self, "on_%s" % (method,)): method_handler = getattr(self, "on_%s" % (method,)) http_server.register_paths(method, patterns, method_handler) else: raise NotImplementedError("RestServlet must register something.") synapse-0.24.0/synapse/http/site.py000066400000000000000000000110261317335640100172410ustar00rootroot00000000000000# Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from synapse.util.logcontext import LoggingContext from twisted.web.server import Site, Request import contextlib import logging import re import time ACCESS_TOKEN_RE = re.compile(r'(\?.*access(_|%5[Ff])token=)[^&]*(.*)$') class SynapseRequest(Request): def __init__(self, site, *args, **kw): Request.__init__(self, *args, **kw) self.site = site self.authenticated_entity = None self.start_time = 0 def __repr__(self): # We overwrite this so that we don't log ``access_token`` return '<%s at 0x%x method=%s uri=%s clientproto=%s site=%s>' % ( self.__class__.__name__, id(self), self.method, self.get_redacted_uri(), self.clientproto, self.site.site_tag, ) def get_redacted_uri(self): return ACCESS_TOKEN_RE.sub( r'\1\3', self.uri ) def get_user_agent(self): return self.requestHeaders.getRawHeaders("User-Agent", [None])[-1] def started_processing(self): self.site.access_logger.info( "%s - %s - Received request: %s %s", self.getClientIP(), self.site.site_tag, self.method, self.get_redacted_uri() ) self.start_time = int(time.time() * 1000) def finished_processing(self): try: context = LoggingContext.current_context() ru_utime, ru_stime = context.get_resource_usage() db_txn_count = context.db_txn_count db_txn_duration = context.db_txn_duration except: ru_utime, ru_stime = (0, 0) db_txn_count, db_txn_duration = (0, 0) self.site.access_logger.info( "%s - %s - {%s}" " Processed request: %dms (%dms, %dms) (%dms/%d)" " %sB %s \"%s %s %s\" \"%s\"", self.getClientIP(), self.site.site_tag, self.authenticated_entity, int(time.time() * 1000) - self.start_time, int(ru_utime * 1000), int(ru_stime * 1000), int(db_txn_duration * 1000), int(db_txn_count), self.sentLength, self.code, self.method, self.get_redacted_uri(), self.clientproto, self.get_user_agent(), ) @contextlib.contextmanager def processing(self): self.started_processing() yield self.finished_processing() class XForwardedForRequest(SynapseRequest): def __init__(self, *args, **kw): SynapseRequest.__init__(self, *args, **kw) """ Add a layer on top of another request that only uses the value of an X-Forwarded-For header as the result of C{getClientIP}. 
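    For example (hypothetical header value), a request carrying
    ``X-Forwarded-For: 203.0.113.7, 10.0.0.1`` reports a client IP of
    ``203.0.113.7``: only the first address in the first header value is
    used, and the upstream proxy addresses after it are ignored.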
""" def getClientIP(self): """ @return: The client address (the first address) in the value of the I{X-Forwarded-For header}. If the header is not present, return C{b"-"}. """ return self.requestHeaders.getRawHeaders( b"x-forwarded-for", [b"-"])[0].split(b",")[0].strip() class SynapseRequestFactory(object): def __init__(self, site, x_forwarded_for): self.site = site self.x_forwarded_for = x_forwarded_for def __call__(self, *args, **kwargs): if self.x_forwarded_for: return XForwardedForRequest(self.site, *args, **kwargs) else: return SynapseRequest(self.site, *args, **kwargs) class SynapseSite(Site): """ Subclass of a twisted http Site that does access logging with python's standard logging """ def __init__(self, logger_name, site_tag, config, resource, *args, **kwargs): Site.__init__(self, resource, *args, **kwargs) self.site_tag = site_tag proxied = config.get("x_forwarded", False) self.requestFactory = SynapseRequestFactory(self, proxied) self.access_logger = logging.getLogger(logger_name) def log(self, request): pass synapse-0.24.0/synapse/metrics/000077500000000000000000000000001317335640100164125ustar00rootroot00000000000000synapse-0.24.0/synapse/metrics/__init__.py000066400000000000000000000126731317335640100205340ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import logging import functools import time import gc from twisted.internet import reactor from .metric import ( CounterMetric, CallbackMetric, DistributionMetric, CacheMetric, MemoryUsageMetric, ) from .process_collector import register_process_collector logger = logging.getLogger(__name__) all_metrics = [] all_collectors = [] class Metrics(object): """ A single Metrics object gives a (mutable) slice view of the all_metrics dict, allowing callers to easily register new metrics that are namespaced nicely.""" def __init__(self, name): self.name_prefix = name def make_subspace(self, name): return Metrics("%s_%s" % (self.name_prefix, name)) def register_collector(self, func): all_collectors.append(func) def _register(self, metric_class, name, *args, **kwargs): full_name = "%s_%s" % (self.name_prefix, name) metric = metric_class(full_name, *args, **kwargs) all_metrics.append(metric) return metric def register_counter(self, *args, **kwargs): return self._register(CounterMetric, *args, **kwargs) def register_callback(self, *args, **kwargs): return self._register(CallbackMetric, *args, **kwargs) def register_distribution(self, *args, **kwargs): return self._register(DistributionMetric, *args, **kwargs) def register_cache(self, *args, **kwargs): return self._register(CacheMetric, *args, **kwargs) def register_memory_metrics(hs): try: import psutil process = psutil.Process() process.memory_info().rss except (ImportError, AttributeError): logger.warn( "psutil is not installed or incorrect version." " Disabling memory metrics." 
) return metric = MemoryUsageMetric(hs, psutil) all_metrics.append(metric) def get_metrics_for(pkg_name): """ Returns a Metrics instance for conveniently creating metrics namespaced with the given name prefix. """ # Convert a "package.name" to "package_name" because Prometheus doesn't # let us use . in metric names return Metrics(pkg_name.replace(".", "_")) def render_all(): strs = [] for collector in all_collectors: collector() for metric in all_metrics: try: strs += metric.render() except Exception: strs += ["# FAILED to render"] logger.exception("Failed to render metric") strs.append("") # to generate a final CRLF return "\n".join(strs) register_process_collector(get_metrics_for("process")) python_metrics = get_metrics_for("python") gc_time = python_metrics.register_distribution("gc_time", labels=["gen"]) gc_unreachable = python_metrics.register_counter("gc_unreachable_total", labels=["gen"]) python_metrics.register_callback( "gc_counts", lambda: {(i,): v for i, v in enumerate(gc.get_count())}, labels=["gen"] ) reactor_metrics = get_metrics_for("python.twisted.reactor") tick_time = reactor_metrics.register_distribution("tick_time") pending_calls_metric = reactor_metrics.register_distribution("pending_calls") def runUntilCurrentTimer(func): @functools.wraps(func) def f(*args, **kwargs): now = reactor.seconds() num_pending = 0 # _newTimedCalls is one long list of *all* pending calls. Below loop # is based off of impl of reactor.runUntilCurrent for delayed_call in reactor._newTimedCalls: if delayed_call.time > now: break if delayed_call.delayed_time > 0: continue num_pending += 1 num_pending += len(reactor.threadCallQueue) start = time.time() * 1000 ret = func(*args, **kwargs) end = time.time() * 1000 tick_time.inc_by(end - start) pending_calls_metric.inc_by(num_pending) # Check if we need to do a manual GC (since its been disabled), and do # one if necessary. threshold = gc.get_threshold() counts = gc.get_count() for i in (2, 1, 0): if threshold[i] < counts[i]: logger.info("Collecting gc %d", i) start = time.time() * 1000 unreachable = gc.collect(i) end = time.time() * 1000 gc_time.inc_by(end - start, i) gc_unreachable.inc_by(unreachable, i) return ret return f try: # Ensure the reactor has all the attributes we expect reactor.runUntilCurrent reactor._newTimedCalls reactor.threadCallQueue # runUntilCurrent is called when we have pending calls. It is called once # per iteratation after fd polling. reactor.runUntilCurrent = runUntilCurrentTimer(reactor.runUntilCurrent) # We manually run the GC each reactor tick so that we can get some metrics # about time spent doing GC, gc.disable() except AttributeError: pass synapse-0.24.0/synapse/metrics/metric.py000066400000000000000000000136721317335640100202600ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
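# A rough sketch of the text format the classes below emit when rendered
# (the metric and label names here are made up for illustration):
#
#     example_requests{method="GET",servlet="RoomSendServlet"} 100
#     example_requests{method="PUT",servlet="RoomSendServlet"} 5
#     example_cache_size 42
#
# Metrics registered without labels render as a bare "name value" pair;
# labelled metrics get the {key="value",...} suffix built by _render_key().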
from itertools import chain # TODO(paul): I can't believe Python doesn't have one of these def map_concat(func, items): # flatten a list-of-lists return list(chain.from_iterable(map(func, items))) class BaseMetric(object): def __init__(self, name, labels=[]): self.name = name self.labels = labels # OK not to clone as we never write it def dimension(self): return len(self.labels) def is_scalar(self): return not len(self.labels) def _render_labelvalue(self, value): # TODO: some kind of value escape return '"%s"' % (value) def _render_key(self, values): if self.is_scalar(): return "" return "{%s}" % ( ",".join(["%s=%s" % (k, self._render_labelvalue(v)) for k, v in zip(self.labels, values)]) ) class CounterMetric(BaseMetric): """The simplest kind of metric; one that stores a monotonically-increasing integer that counts events.""" def __init__(self, *args, **kwargs): super(CounterMetric, self).__init__(*args, **kwargs) self.counts = {} # Scalar metrics are never empty if self.is_scalar(): self.counts[()] = 0 def inc_by(self, incr, *values): if len(values) != self.dimension(): raise ValueError( "Expected as many values to inc() as labels (%d)" % (self.dimension()) ) # TODO: should assert that the tag values are all strings if values not in self.counts: self.counts[values] = incr else: self.counts[values] += incr def inc(self, *values): self.inc_by(1, *values) def render_item(self, k): return ["%s%s %d" % (self.name, self._render_key(k), self.counts[k])] def render(self): return map_concat(self.render_item, sorted(self.counts.keys())) class CallbackMetric(BaseMetric): """A metric that returns the numeric value returned by a callback whenever it is rendered. Typically this is used to implement gauges that yield the size or other state of some in-memory object by actively querying it.""" def __init__(self, name, callback, labels=[]): super(CallbackMetric, self).__init__(name, labels=labels) self.callback = callback def render(self): value = self.callback() if self.is_scalar(): return ["%s %.12g" % (self.name, value)] return ["%s%s %.12g" % (self.name, self._render_key(k), value[k]) for k in sorted(value.keys())] class DistributionMetric(object): """A combination of an event counter and an accumulator, which counts both the number of events and accumulates the total value. Typically this could be used to keep track of method-running times, or other distributions of values that occur in discrete occurances. TODO(paul): Try to export some heatmap-style stats? """ def __init__(self, name, *args, **kwargs): self.counts = CounterMetric(name + ":count", **kwargs) self.totals = CounterMetric(name + ":total", **kwargs) def inc_by(self, inc, *values): self.counts.inc(*values) self.totals.inc_by(inc, *values) def render(self): return self.counts.render() + self.totals.render() class CacheMetric(object): __slots__ = ("name", "cache_name", "hits", "misses", "size_callback") def __init__(self, name, size_callback, cache_name): self.name = name self.cache_name = cache_name self.hits = 0 self.misses = 0 self.size_callback = size_callback def inc_hits(self): self.hits += 1 def inc_misses(self): self.misses += 1 def render(self): size = self.size_callback() hits = self.hits total = self.misses + self.hits return [ """%s:hits{name="%s"} %d""" % (self.name, self.cache_name, hits), """%s:total{name="%s"} %d""" % (self.name, self.cache_name, total), """%s:size{name="%s"} %d""" % (self.name, self.cache_name, size), ] class MemoryUsageMetric(object): """Keeps track of the current memory usage, using psutil. 
The class will keep the current min/max/sum/counts of rss over the last WINDOW_SIZE_SEC, by polling UPDATE_HZ times per second """ UPDATE_HZ = 2 # number of times to get memory per second WINDOW_SIZE_SEC = 30 # the size of the window in seconds def __init__(self, hs, psutil): clock = hs.get_clock() self.memory_snapshots = [] self.process = psutil.Process() clock.looping_call(self._update_curr_values, 1000 / self.UPDATE_HZ) def _update_curr_values(self): max_size = self.UPDATE_HZ * self.WINDOW_SIZE_SEC self.memory_snapshots.append(self.process.memory_info().rss) self.memory_snapshots[:] = self.memory_snapshots[-max_size:] def render(self): if not self.memory_snapshots: return [] max_rss = max(self.memory_snapshots) min_rss = min(self.memory_snapshots) sum_rss = sum(self.memory_snapshots) len_rss = len(self.memory_snapshots) return [ "process_psutil_rss:max %d" % max_rss, "process_psutil_rss:min %d" % min_rss, "process_psutil_rss:total %d" % sum_rss, "process_psutil_rss:count %d" % len_rss, ] synapse-0.24.0/synapse/metrics/process_collector.py000066400000000000000000000073151317335640100225160ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import os TICKS_PER_SEC = 100 BYTES_PER_PAGE = 4096 HAVE_PROC_STAT = os.path.exists("/proc/stat") HAVE_PROC_SELF_STAT = os.path.exists("/proc/self/stat") HAVE_PROC_SELF_LIMITS = os.path.exists("/proc/self/limits") HAVE_PROC_SELF_FD = os.path.exists("/proc/self/fd") # Field indexes from /proc/self/stat, taken from the proc(5) manpage STAT_FIELDS = { "utime": 14, "stime": 15, "starttime": 22, "vsize": 23, "rss": 24, } stats = {} # In order to report process_start_time_seconds we need to know the # machine's boot time, because the value in /proc/self/stat is relative to # this boot_time = None if HAVE_PROC_STAT: with open("/proc/stat") as _procstat: for line in _procstat: if line.startswith("btime "): boot_time = int(line.split()[1]) def update_resource_metrics(): if HAVE_PROC_SELF_STAT: global stats with open("/proc/self/stat") as s: line = s.read() # line is PID (command) more stats go here ... 
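            # A concrete (made-up) example of what that line looks like:
            #   1234 (synapse) S 1 1234 1234 0 -1 4194304 ... utime stime ...
            # The command name is parenthesised and may itself contain spaces,
            # so split once on ") " to drop the pid/comm prefix before
            # splitting the remaining space-separated counters.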
raw_stats = line.split(") ", 1)[1].split(" ") for (name, index) in STAT_FIELDS.iteritems(): # subtract 3 from the index, because proc(5) is 1-based, and # we've lost the first two fields in PID and COMMAND above stats[name] = int(raw_stats[index - 3]) def _count_fds(): # Not every OS will have a /proc/self/fd directory if not HAVE_PROC_SELF_FD: return 0 return len(os.listdir("/proc/self/fd")) def register_process_collector(process_metrics): process_metrics.register_collector(update_resource_metrics) if HAVE_PROC_SELF_STAT: process_metrics.register_callback( "cpu_user_seconds_total", lambda: float(stats["utime"]) / TICKS_PER_SEC ) process_metrics.register_callback( "cpu_system_seconds_total", lambda: float(stats["stime"]) / TICKS_PER_SEC ) process_metrics.register_callback( "cpu_seconds_total", lambda: (float(stats["utime"] + stats["stime"])) / TICKS_PER_SEC ) process_metrics.register_callback( "virtual_memory_bytes", lambda: int(stats["vsize"]) ) process_metrics.register_callback( "resident_memory_bytes", lambda: int(stats["rss"]) * BYTES_PER_PAGE ) process_metrics.register_callback( "start_time_seconds", lambda: boot_time + int(stats["starttime"]) / TICKS_PER_SEC ) if HAVE_PROC_SELF_FD: process_metrics.register_callback( "open_fds", lambda: _count_fds() ) if HAVE_PROC_SELF_LIMITS: def _get_max_fds(): with open("/proc/self/limits") as limits: for line in limits: if not line.startswith("Max open files "): continue # Line is Max open files $SOFT $HARD return int(line.split()[3]) return None process_metrics.register_callback( "max_fds", lambda: _get_max_fds() ) synapse-0.24.0/synapse/metrics/resource.py000066400000000000000000000022151317335640100206130ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from twisted.web.resource import Resource import synapse.metrics METRICS_PREFIX = "/_synapse/metrics" class MetricsResource(Resource): isLeaf = True def __init__(self, hs): Resource.__init__(self) # Resource is old-style, so no super() self.hs = hs def render_GET(self, request): response = synapse.metrics.render_all() request.setHeader("Content-Type", "text/plain") request.setHeader("Content-Length", str(len(response))) # Encode as UTF-8 (default) return response.encode() synapse-0.24.0/synapse/notifier.py000066400000000000000000000506421317335640100171440ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014 - 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
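# Rough shape of the flow implemented below (a sketch, not an exhaustive
# description): handlers call Notifier.on_new_room_event / on_new_event when
# something has been persisted; long-polling requests (/events, /sync) sit in
# Notifier.wait_for_events until the per-user _NotifierUserStream is poked
# with a new stream token or the timeout expires.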
from twisted.internet import defer from synapse.api.constants import EventTypes, Membership from synapse.api.errors import AuthError from synapse.handlers.presence import format_user_presence_state from synapse.util import DeferredTimedOutError from synapse.util.logutils import log_function from synapse.util.async import ObservableDeferred from synapse.util.logcontext import PreserveLoggingContext, preserve_fn from synapse.util.metrics import Measure from synapse.types import StreamToken from synapse.visibility import filter_events_for_client import synapse.metrics from collections import namedtuple import logging logger = logging.getLogger(__name__) metrics = synapse.metrics.get_metrics_for(__name__) notified_events_counter = metrics.register_counter("notified_events") users_woken_by_stream_counter = metrics.register_counter( "users_woken_by_stream", labels=["stream"] ) # TODO(paul): Should be shared somewhere def count(func, l): """Return the number of items in l for which func returns true.""" n = 0 for x in l: if func(x): n += 1 return n class _NotificationListener(object): """ This represents a single client connection to the events stream. The events stream handler will have yielded to the deferred, so to notify the handler it is sufficient to resolve the deferred. """ __slots__ = ["deferred"] def __init__(self, deferred): self.deferred = deferred class _NotifierUserStream(object): """This represents a user connected to the event stream. It tracks the most recent stream token for that user. At a given point a user may have a number of streams listening for events. This listener will also keep track of which rooms it is listening in so that it can remove itself from the indexes in the Notifier class. """ def __init__(self, user_id, rooms, current_token, time_now_ms): self.user_id = user_id self.rooms = set(rooms) self.current_token = current_token # The last token for which we should wake up any streams that have a # token that comes before it. This gets updated everytime we get poked. # We start it at the current token since if we get any streams # that have a token from before we have no idea whether they should be # woken up or not, so lets just wake them up. self.last_notified_token = current_token self.last_notified_ms = time_now_ms with PreserveLoggingContext(): self.notify_deferred = ObservableDeferred(defer.Deferred()) def notify(self, stream_key, stream_id, time_now_ms): """Notify any listeners for this user of a new event from an event source. Args: stream_key(str): The stream the event came from. stream_id(str): The new id for the stream the event came from. time_now_ms(int): The current time in milliseconds. """ self.current_token = self.current_token.copy_and_advance( stream_key, stream_id ) self.last_notified_token = self.current_token self.last_notified_ms = time_now_ms noify_deferred = self.notify_deferred users_woken_by_stream_counter.inc(stream_key) with PreserveLoggingContext(): self.notify_deferred = ObservableDeferred(defer.Deferred()) noify_deferred.callback(self.current_token) def remove(self, notifier): """ Remove this listener from all the indexes in the Notifier it knows about. """ for room in self.rooms: lst = notifier.room_to_user_streams.get(room, set()) lst.discard(self) notifier.user_to_user_stream.pop(self.user_id) def count_listeners(self): return len(self.notify_deferred.observers()) def new_listener(self, token): """Returns a deferred that is resolved when there is a new token greater than the given token. 
Args: token: The token from which we are streaming from, i.e. we shouldn't notify for things that happened before this. """ # Immediately wake up stream if something has already since happened # since their last token. if self.last_notified_token.is_after(token): return _NotificationListener(defer.succeed(self.current_token)) else: return _NotificationListener(self.notify_deferred.observe()) class EventStreamResult(namedtuple("EventStreamResult", ("events", "tokens"))): def __nonzero__(self): return bool(self.events) class Notifier(object): """ This class is responsible for notifying any listeners when there are new events available for it. Primarily used from the /events stream. """ UNUSED_STREAM_EXPIRY_MS = 10 * 60 * 1000 def __init__(self, hs): self.user_to_user_stream = {} self.room_to_user_streams = {} self.event_sources = hs.get_event_sources() self.store = hs.get_datastore() self.pending_new_room_events = [] self.replication_callbacks = [] self.clock = hs.get_clock() self.appservice_handler = hs.get_application_service_handler() if hs.should_send_federation(): self.federation_sender = hs.get_federation_sender() else: self.federation_sender = None self.state_handler = hs.get_state_handler() self.clock.looping_call( self.remove_expired_streams, self.UNUSED_STREAM_EXPIRY_MS ) self.replication_deferred = ObservableDeferred(defer.Deferred()) # This is not a very cheap test to perform, but it's only executed # when rendering the metrics page, which is likely once per minute at # most when scraping it. def count_listeners(): all_user_streams = set() for x in self.room_to_user_streams.values(): all_user_streams |= x for x in self.user_to_user_stream.values(): all_user_streams.add(x) return sum(stream.count_listeners() for stream in all_user_streams) metrics.register_callback("listeners", count_listeners) metrics.register_callback( "rooms", lambda: count(bool, self.room_to_user_streams.values()), ) metrics.register_callback( "users", lambda: len(self.user_to_user_stream), ) def add_replication_callback(self, cb): """Add a callback that will be called when some new data is available. Callback is not given any arguments. """ self.replication_callbacks.append(cb) def on_new_room_event(self, event, room_stream_id, max_room_stream_id, extra_users=[]): """ Used by handlers to inform the notifier something has happened in the room, room event wise. This triggers the notifier to wake up any listeners that are listening to the room, and any listeners for the users in the `extra_users` param. The events can be peristed out of order. The notifier will wait until all previous events have been persisted before notifying the client streams. """ self.pending_new_room_events.append(( room_stream_id, event, extra_users )) self._notify_pending_new_room_events(max_room_stream_id) self.notify_replication() def _notify_pending_new_room_events(self, max_room_stream_id): """Notify for the room events that were queued waiting for a previous event to be persisted. Args: max_room_stream_id(int): The highest stream_id below which all events have been persisted. 
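                Events queued with a stream_id above this value are kept back
                and re-queued until a later call reports a high enough
                max_room_stream_id.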
""" pending = self.pending_new_room_events self.pending_new_room_events = [] for room_stream_id, event, extra_users in pending: if room_stream_id > max_room_stream_id: self.pending_new_room_events.append(( room_stream_id, event, extra_users )) else: self._on_new_room_event(event, room_stream_id, extra_users) def _on_new_room_event(self, event, room_stream_id, extra_users=[]): """Notify any user streams that are interested in this room event""" # poke any interested application service. preserve_fn(self.appservice_handler.notify_interested_services)( room_stream_id ) if self.federation_sender: preserve_fn(self.federation_sender.notify_new_events)( room_stream_id ) if event.type == EventTypes.Member and event.membership == Membership.JOIN: self._user_joined_room(event.state_key, event.room_id) self.on_new_event( "room_key", room_stream_id, users=extra_users, rooms=[event.room_id], ) def on_new_event(self, stream_key, new_token, users=[], rooms=[]): """ Used to inform listeners that something has happend event wise. Will wake up all listeners for the given users and rooms. """ with PreserveLoggingContext(): with Measure(self.clock, "on_new_event"): user_streams = set() for user in users: user_stream = self.user_to_user_stream.get(str(user)) if user_stream is not None: user_streams.add(user_stream) for room in rooms: user_streams |= self.room_to_user_streams.get(room, set()) time_now_ms = self.clock.time_msec() for user_stream in user_streams: try: user_stream.notify(stream_key, new_token, time_now_ms) except: logger.exception("Failed to notify listener") self.notify_replication() def on_new_replication_data(self): """Used to inform replication listeners that something has happend without waking up any of the normal user event streams""" with PreserveLoggingContext(): self.notify_replication() @defer.inlineCallbacks def wait_for_events(self, user_id, timeout, callback, room_ids=None, from_token=StreamToken.START): """Wait until the callback returns a non empty response or the timeout fires. """ user_stream = self.user_to_user_stream.get(user_id) if user_stream is None: current_token = yield self.event_sources.get_current_token() if room_ids is None: room_ids = yield self.store.get_rooms_for_user(user_id) user_stream = _NotifierUserStream( user_id=user_id, rooms=room_ids, current_token=current_token, time_now_ms=self.clock.time_msec(), ) self._register_with_keys(user_stream) result = None prev_token = from_token if timeout: end_time = self.clock.time_msec() + timeout while not result: try: now = self.clock.time_msec() if end_time <= now: break # Now we wait for the _NotifierUserStream to be told there # is a new token. listener = user_stream.new_listener(prev_token) with PreserveLoggingContext(): yield self.clock.time_bound_deferred( listener.deferred, time_out=(end_time - now) / 1000. ) current_token = user_stream.current_token result = yield callback(prev_token, current_token) if result: break # Update the prev_token to the current_token since nothing # has happened between the old prev_token and the current_token prev_token = current_token except DeferredTimedOutError: break except defer.CancelledError: break if result is None: # This happened if there was no timeout or if the timeout had # already expired. 
current_token = user_stream.current_token result = yield callback(prev_token, current_token) defer.returnValue(result) @defer.inlineCallbacks def get_events_for(self, user, pagination_config, timeout, only_keys=None, is_guest=False, explicit_room_id=None): """ For the given user and rooms, return any new events for them. If there are no new events wait for up to `timeout` milliseconds for any new events to happen before returning. If `only_keys` is not None, events from keys will be sent down. If explicit_room_id is not set, the user's joined rooms will be polled for events. If explicit_room_id is set, that room will be polled for events only if it is world readable or the user has joined the room. """ from_token = pagination_config.from_token if not from_token: from_token = yield self.event_sources.get_current_token() limit = pagination_config.limit room_ids, is_joined = yield self._get_room_ids(user, explicit_room_id) is_peeking = not is_joined @defer.inlineCallbacks def check_for_updates(before_token, after_token): if not after_token.is_after(before_token): defer.returnValue(EventStreamResult([], (from_token, from_token))) events = [] end_token = from_token for name, source in self.event_sources.sources.items(): keyname = "%s_key" % name before_id = getattr(before_token, keyname) after_id = getattr(after_token, keyname) if before_id == after_id: continue if only_keys and name not in only_keys: continue new_events, new_key = yield source.get_new_events( user=user, from_key=getattr(from_token, keyname), limit=limit, is_guest=is_peeking, room_ids=room_ids, explicit_room_id=explicit_room_id, ) if name == "room": new_events = yield filter_events_for_client( self.store, user.to_string(), new_events, is_peeking=is_peeking, ) elif name == "presence": now = self.clock.time_msec() new_events[:] = [ { "type": "m.presence", "content": format_user_presence_state(event, now), } for event in new_events ] events.extend(new_events) end_token = end_token.copy_and_replace(keyname, new_key) defer.returnValue(EventStreamResult(events, (from_token, end_token))) user_id_for_stream = user.to_string() if is_peeking: # Internally, the notifier keeps an event stream per user_id. # This is used by both /sync and /events. # We want /events to be used for peeking independently of /sync, # without polluting its contents. So we invent an illegal user ID # (which thus cannot clash with any real users) for keying peeking # over /events. # # I am sorry for what I have done. 
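            # e.g. "_PEEKING_!abc123:example.com_@alice:example.com" (the room
            # and user ids here are made up for illustration)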
user_id_for_stream = "_PEEKING_%s_%s" % ( explicit_room_id, user_id_for_stream ) result = yield self.wait_for_events( user_id_for_stream, timeout, check_for_updates, room_ids=room_ids, from_token=from_token, ) defer.returnValue(result) @defer.inlineCallbacks def _get_room_ids(self, user, explicit_room_id): joined_room_ids = yield self.store.get_rooms_for_user(user.to_string()) if explicit_room_id: if explicit_room_id in joined_room_ids: defer.returnValue(([explicit_room_id], True)) if (yield self._is_world_readable(explicit_room_id)): defer.returnValue(([explicit_room_id], False)) raise AuthError(403, "Non-joined access not allowed") defer.returnValue((joined_room_ids, True)) @defer.inlineCallbacks def _is_world_readable(self, room_id): state = yield self.state_handler.get_current_state( room_id, EventTypes.RoomHistoryVisibility, "", ) if state and "history_visibility" in state.content: defer.returnValue(state.content["history_visibility"] == "world_readable") else: defer.returnValue(False) @log_function def remove_expired_streams(self): time_now_ms = self.clock.time_msec() expired_streams = [] expire_before_ts = time_now_ms - self.UNUSED_STREAM_EXPIRY_MS for stream in self.user_to_user_stream.values(): if stream.count_listeners(): continue if stream.last_notified_ms < expire_before_ts: expired_streams.append(stream) for expired_stream in expired_streams: expired_stream.remove(self) @log_function def _register_with_keys(self, user_stream): self.user_to_user_stream[user_stream.user_id] = user_stream for room in user_stream.rooms: s = self.room_to_user_streams.setdefault(room, set()) s.add(user_stream) def _user_joined_room(self, user_id, room_id): new_user_stream = self.user_to_user_stream.get(user_id) if new_user_stream is not None: room_streams = self.room_to_user_streams.setdefault(room_id, set()) room_streams.add(new_user_stream) new_user_stream.rooms.add(room_id) def notify_replication(self): """Notify the any replication listeners that there's a new event""" with PreserveLoggingContext(): deferred = self.replication_deferred self.replication_deferred = ObservableDeferred(defer.Deferred()) deferred.callback(None) for cb in self.replication_callbacks: preserve_fn(cb)() @defer.inlineCallbacks def wait_for_replication(self, callback, timeout): """Wait for an event to happen. Args: callback: Gets called whenever an event happens. If this returns a truthy value then ``wait_for_replication`` returns, otherwise it waits for another event. timeout: How many milliseconds to wait for callback return a truthy value. Returns: A deferred that resolves with the value returned by the callback. """ listener = _NotificationListener(None) end_time = self.clock.time_msec() + timeout while True: listener.deferred = self.replication_deferred.observe() result = yield callback() if result: break now = self.clock.time_msec() if end_time <= now: break try: with PreserveLoggingContext(): yield self.clock.time_bound_deferred( listener.deferred, time_out=(end_time - now) / 1000. ) except DeferredTimedOutError: break except defer.CancelledError: break defer.returnValue(result) synapse-0.24.0/synapse/push/000077500000000000000000000000001317335640100157235ustar00rootroot00000000000000synapse-0.24.0/synapse/push/__init__.py000066400000000000000000000013401317335640100200320ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. class PusherConfigException(Exception): def __init__(self, msg): super(PusherConfigException, self).__init__(msg) synapse-0.24.0/synapse/push/action_generator.py000066400000000000000000000034361317335640100216260ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from twisted.internet import defer from .bulk_push_rule_evaluator import BulkPushRuleEvaluator from synapse.util.metrics import Measure import logging logger = logging.getLogger(__name__) class ActionGenerator(object): def __init__(self, hs): self.hs = hs self.clock = hs.get_clock() self.store = hs.get_datastore() self.bulk_evaluator = BulkPushRuleEvaluator(hs) # really we want to get all user ids and all profile tags too, # since we want the actions for each profile tag for every user and # also actions for a client with no profile tag for each user. # Currently the event stream doesn't support profile tags on an # event stream, so we just run the rules for a client with no profile # tag (ie. we just need all the users). @defer.inlineCallbacks def handle_push_actions_for_event(self, event, context): with Measure(self.clock, "action_for_event_by_user"): actions_by_user = yield self.bulk_evaluator.action_for_event_by_user( event, context ) context.push_actions = [ (uid, actions) for uid, actions in actions_by_user.iteritems() ] synapse-0.24.0/synapse/push/baserules.py000066400000000000000000000267031317335640100202720ustar00rootroot00000000000000# Copyright 2015, 2016 OpenMarket Ltd # Copyright 2017 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from synapse.push.rulekinds import PRIORITY_CLASS_MAP, PRIORITY_CLASS_INVERSE_MAP import copy def list_with_base_rules(rawrules): """Combine the list of rules set by the user with the default push rules Args: rawrules(list): The rules the user has modified or set. Returns: A new list with the rules set by the user combined with the defaults. """ ruleslist = [] # Grab the base rules that the user has modified. # The modified base rules have a priority_class of -1. 
modified_base_rules = { r['rule_id']: r for r in rawrules if r['priority_class'] < 0 } # Remove the modified base rules from the list, They'll be added back # in the default postions in the list. rawrules = [r for r in rawrules if r['priority_class'] >= 0] # shove the server default rules for each kind onto the end of each current_prio_class = PRIORITY_CLASS_INVERSE_MAP.keys()[-1] ruleslist.extend(make_base_prepend_rules( PRIORITY_CLASS_INVERSE_MAP[current_prio_class], modified_base_rules )) for r in rawrules: if r['priority_class'] < current_prio_class: while r['priority_class'] < current_prio_class: ruleslist.extend(make_base_append_rules( PRIORITY_CLASS_INVERSE_MAP[current_prio_class], modified_base_rules, )) current_prio_class -= 1 if current_prio_class > 0: ruleslist.extend(make_base_prepend_rules( PRIORITY_CLASS_INVERSE_MAP[current_prio_class], modified_base_rules, )) ruleslist.append(r) while current_prio_class > 0: ruleslist.extend(make_base_append_rules( PRIORITY_CLASS_INVERSE_MAP[current_prio_class], modified_base_rules, )) current_prio_class -= 1 if current_prio_class > 0: ruleslist.extend(make_base_prepend_rules( PRIORITY_CLASS_INVERSE_MAP[current_prio_class], modified_base_rules, )) return ruleslist def make_base_append_rules(kind, modified_base_rules): rules = [] if kind == 'override': rules = BASE_APPEND_OVERRIDE_RULES elif kind == 'underride': rules = BASE_APPEND_UNDERRIDE_RULES elif kind == 'content': rules = BASE_APPEND_CONTENT_RULES # Copy the rules before modifying them rules = copy.deepcopy(rules) for r in rules: # Only modify the actions, keep the conditions the same. modified = modified_base_rules.get(r['rule_id']) if modified: r['actions'] = modified['actions'] return rules def make_base_prepend_rules(kind, modified_base_rules): rules = [] if kind == 'override': rules = BASE_PREPEND_OVERRIDE_RULES # Copy the rules before modifying them rules = copy.deepcopy(rules) for r in rules: # Only modify the actions, keep the conditions the same. modified = modified_base_rules.get(r['rule_id']) if modified: r['actions'] = modified['actions'] return rules BASE_APPEND_CONTENT_RULES = [ { 'rule_id': 'global/content/.m.rule.contains_user_name', 'conditions': [ { 'kind': 'event_match', 'key': 'content.body', 'pattern_type': 'user_localpart' } ], 'actions': [ 'notify', { 'set_tweak': 'sound', 'value': 'default', }, { 'set_tweak': 'highlight' } ] }, ] BASE_PREPEND_OVERRIDE_RULES = [ { 'rule_id': 'global/override/.m.rule.master', 'enabled': False, 'conditions': [], 'actions': [ "dont_notify" ] } ] BASE_APPEND_OVERRIDE_RULES = [ { 'rule_id': 'global/override/.m.rule.suppress_notices', 'conditions': [ { 'kind': 'event_match', 'key': 'content.msgtype', 'pattern': 'm.notice', '_id': '_suppress_notices', } ], 'actions': [ 'dont_notify', ] }, # NB. .m.rule.invite_for_me must be higher prio than .m.rule.member_event # otherwise invites will be matched by .m.rule.member_event { 'rule_id': 'global/override/.m.rule.invite_for_me', 'conditions': [ { 'kind': 'event_match', 'key': 'type', 'pattern': 'm.room.member', '_id': '_member', }, { 'kind': 'event_match', 'key': 'content.membership', 'pattern': 'invite', '_id': '_invite_member', }, { 'kind': 'event_match', 'key': 'state_key', 'pattern_type': 'user_id' }, ], 'actions': [ 'notify', { 'set_tweak': 'sound', 'value': 'default' }, { 'set_tweak': 'highlight', 'value': False } ] }, # Will we sometimes want to know about people joining and leaving? # Perhaps: if so, this could be expanded upon. Seems the most usual case # is that we don't though. 
We add this override rule so that even if # the room rule is set to notify, we don't get notifications about # join/leave/avatar/displayname events. # See also: https://matrix.org/jira/browse/SYN-607 { 'rule_id': 'global/override/.m.rule.member_event', 'conditions': [ { 'kind': 'event_match', 'key': 'type', 'pattern': 'm.room.member', '_id': '_member', } ], 'actions': [ 'dont_notify' ] }, # This was changed from underride to override so it's closer in priority # to the content rules where the user name highlight rule lives. This # way a room rule is lower priority than both but a custom override rule # is higher priority than both. { 'rule_id': 'global/override/.m.rule.contains_display_name', 'conditions': [ { 'kind': 'contains_display_name' } ], 'actions': [ 'notify', { 'set_tweak': 'sound', 'value': 'default' }, { 'set_tweak': 'highlight' } ] }, { 'rule_id': 'global/override/.m.rule.roomnotif', 'conditions': [ { 'kind': 'event_match', 'key': 'content.body', 'pattern': '@room', '_id': '_roomnotif_content', }, { 'kind': 'sender_notification_permission', 'key': 'room', '_id': '_roomnotif_pl', }, ], 'actions': [ 'notify', { 'set_tweak': 'highlight', 'value': True, } ] } ] BASE_APPEND_UNDERRIDE_RULES = [ { 'rule_id': 'global/underride/.m.rule.call', 'conditions': [ { 'kind': 'event_match', 'key': 'type', 'pattern': 'm.call.invite', '_id': '_call', } ], 'actions': [ 'notify', { 'set_tweak': 'sound', 'value': 'ring' }, { 'set_tweak': 'highlight', 'value': False } ] }, # XXX: once m.direct is standardised everywhere, we should use it to detect # a DM from the user's perspective rather than this heuristic. { 'rule_id': 'global/underride/.m.rule.room_one_to_one', 'conditions': [ { 'kind': 'room_member_count', 'is': '2', '_id': 'member_count', }, { 'kind': 'event_match', 'key': 'type', 'pattern': 'm.room.message', '_id': '_message', } ], 'actions': [ 'notify', { 'set_tweak': 'sound', 'value': 'default' }, { 'set_tweak': 'highlight', 'value': False } ] }, # XXX: this is going to fire for events which aren't m.room.messages # but are encrypted (e.g. m.call.*)... { 'rule_id': 'global/underride/.m.rule.encrypted_room_one_to_one', 'conditions': [ { 'kind': 'room_member_count', 'is': '2', '_id': 'member_count', }, { 'kind': 'event_match', 'key': 'type', 'pattern': 'm.room.encrypted', '_id': '_encrypted', } ], 'actions': [ 'notify', { 'set_tweak': 'sound', 'value': 'default' }, { 'set_tweak': 'highlight', 'value': False } ] }, { 'rule_id': 'global/underride/.m.rule.message', 'conditions': [ { 'kind': 'event_match', 'key': 'type', 'pattern': 'm.room.message', '_id': '_message', } ], 'actions': [ 'notify', { 'set_tweak': 'highlight', 'value': False } ] }, # XXX: this is going to fire for events which aren't m.room.messages # but are encrypted (e.g. m.call.*)... 
{ 'rule_id': 'global/underride/.m.rule.encrypted', 'conditions': [ { 'kind': 'event_match', 'key': 'type', 'pattern': 'm.room.encrypted', '_id': '_encrypted', } ], 'actions': [ 'notify', { 'set_tweak': 'highlight', 'value': False } ] } ] BASE_RULE_IDS = set() for r in BASE_APPEND_CONTENT_RULES: r['priority_class'] = PRIORITY_CLASS_MAP['content'] r['default'] = True BASE_RULE_IDS.add(r['rule_id']) for r in BASE_PREPEND_OVERRIDE_RULES: r['priority_class'] = PRIORITY_CLASS_MAP['override'] r['default'] = True BASE_RULE_IDS.add(r['rule_id']) for r in BASE_APPEND_OVERRIDE_RULES: r['priority_class'] = PRIORITY_CLASS_MAP['override'] r['default'] = True BASE_RULE_IDS.add(r['rule_id']) for r in BASE_APPEND_UNDERRIDE_RULES: r['priority_class'] = PRIORITY_CLASS_MAP['underride'] r['default'] = True BASE_RULE_IDS.add(r['rule_id']) synapse-0.24.0/synapse/push/bulk_push_rule_evaluator.py000066400000000000000000000437061317335640100234140ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015 OpenMarket Ltd # Copyright 2017 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import logging from twisted.internet import defer from .push_rule_evaluator import PushRuleEvaluatorForEvent from synapse.event_auth import get_user_power_level from synapse.api.constants import EventTypes, Membership from synapse.metrics import get_metrics_for from synapse.util.caches import metrics as cache_metrics from synapse.util.caches.descriptors import cached from synapse.util.async import Linearizer from synapse.state import POWER_KEY from collections import namedtuple logger = logging.getLogger(__name__) rules_by_room = {} push_metrics = get_metrics_for(__name__) push_rules_invalidation_counter = push_metrics.register_counter( "push_rules_invalidation_counter" ) push_rules_state_size_counter = push_metrics.register_counter( "push_rules_state_size_counter" ) # Measures whether we use the fast path of using state deltas, or if we have to # recalculate from scratch push_rules_delta_state_cache_metric = cache_metrics.register_cache( "cache", size_callback=lambda: 0, # Meaningless size, as this isn't a cache that stores values cache_name="push_rules_delta_state_cache_metric", ) class BulkPushRuleEvaluator(object): """Calculates the outcome of push rules for an event for all users in the room at once. """ def __init__(self, hs): self.hs = hs self.store = hs.get_datastore() self.auth = hs.get_auth() self.room_push_rule_cache_metrics = cache_metrics.register_cache( "cache", size_callback=lambda: 0, # There's not good value for this cache_name="room_push_rule_cache", ) @defer.inlineCallbacks def _get_rules_for_event(self, event, context): """This gets the rules for all users in the room at the time of the event, as well as the push rules for the invitee if the event is an invite. 
Returns: dict of user_id -> push_rules """ room_id = event.room_id rules_for_room = self._get_rules_for_room(room_id) rules_by_user = yield rules_for_room.get_rules(event, context) # if this event is an invite event, we may need to run rules for the user # who's been invited, otherwise they won't get told they've been invited if event.type == 'm.room.member' and event.content['membership'] == 'invite': invited = event.state_key if invited and self.hs.is_mine_id(invited): has_pusher = yield self.store.user_has_pusher(invited) if has_pusher: rules_by_user = dict(rules_by_user) rules_by_user[invited] = yield self.store.get_push_rules_for_user( invited ) defer.returnValue(rules_by_user) @cached() def _get_rules_for_room(self, room_id): """Get the current RulesForRoom object for the given room id Returns: RulesForRoom """ # It's important that RulesForRoom gets added to self._get_rules_for_room.cache # before any lookup methods get called on it as otherwise there may be # a race if invalidate_all gets called (which assumes its in the cache) return RulesForRoom( self.hs, room_id, self._get_rules_for_room.cache, self.room_push_rule_cache_metrics, ) @defer.inlineCallbacks def _get_power_levels_and_sender_level(self, event, context): pl_event_id = context.prev_state_ids.get(POWER_KEY) if pl_event_id: # fastpath: if there's a power level event, that's all we need, and # not having a power level event is an extreme edge case pl_event = yield self.store.get_event(pl_event_id) auth_events = {POWER_KEY: pl_event} else: auth_events_ids = yield self.auth.compute_auth_events( event, context.prev_state_ids, for_verification=False, ) auth_events = yield self.store.get_events(auth_events_ids) auth_events = { (e.type, e.state_key): e for e in auth_events.itervalues() } sender_level = get_user_power_level(event.sender, auth_events) pl_event = auth_events.get(POWER_KEY) defer.returnValue((pl_event.content if pl_event else {}, sender_level)) @defer.inlineCallbacks def action_for_event_by_user(self, event, context): """Given an event and context, evaluate the push rules and return the results Returns: dict of user_id -> action """ rules_by_user = yield self._get_rules_for_event(event, context) actions_by_user = {} room_members = yield self.store.get_joined_users_from_context( event, context ) (power_levels, sender_power_level) = ( yield self._get_power_levels_and_sender_level(event, context) ) evaluator = PushRuleEvaluatorForEvent( event, len(room_members), sender_power_level, power_levels, ) condition_cache = {} for uid, rules in rules_by_user.iteritems(): if event.sender == uid: continue if not event.is_state(): is_ignored = yield self.store.is_ignored_by(event.sender, uid) if is_ignored: continue display_name = None profile_info = room_members.get(uid) if profile_info: display_name = profile_info.display_name if not display_name: # Handle the case where we are pushing a membership event to # that user, as they might not be already joined. 
if event.type == EventTypes.Member and event.state_key == uid: display_name = event.content.get("displayname", None) for rule in rules: if 'enabled' in rule and not rule['enabled']: continue matches = _condition_checker( evaluator, rule['conditions'], uid, display_name, condition_cache ) if matches: actions = [x for x in rule['actions'] if x != 'dont_notify'] if actions and 'notify' in actions: actions_by_user[uid] = actions break defer.returnValue(actions_by_user) def _condition_checker(evaluator, conditions, uid, display_name, cache): for cond in conditions: _id = cond.get("_id", None) if _id: res = cache.get(_id, None) if res is False: return False elif res is True: continue res = evaluator.matches(cond, uid, display_name) if _id: cache[_id] = bool(res) if not res: return False return True class RulesForRoom(object): """Caches push rules for users in a room. This efficiently handles users joining/leaving the room by not invalidating the entire cache for the room. """ def __init__(self, hs, room_id, rules_for_room_cache, room_push_rule_cache_metrics): """ Args: hs (HomeServer) room_id (str) rules_for_room_cache(Cache): The cache object that caches these RoomsForUser objects. room_push_rule_cache_metrics (CacheMetric) """ self.room_id = room_id self.is_mine_id = hs.is_mine_id self.store = hs.get_datastore() self.room_push_rule_cache_metrics = room_push_rule_cache_metrics self.linearizer = Linearizer(name="rules_for_room") self.member_map = {} # event_id -> (user_id, state) self.rules_by_user = {} # user_id -> rules # The last state group we updated the caches for. If the state_group of # a new event comes along, we know that we can just return the cached # result. # On invalidation of the rules themselves (if the user changes them), # we invalidate everything and set state_group to `object()` self.state_group = object() # A sequence number to keep track of when we're allowed to update the # cache. We bump the sequence number when we invalidate the cache. If # the sequence number changes while we're calculating stuff we should # not update the cache with it. self.sequence = 0 # A cache of user_ids that we *know* aren't interesting, e.g. user_ids # owned by AS's, or remote users, etc. (I.e. users we will never need to # calculate push for) # These never need to be invalidated as we will never set up push for # them. self.uninteresting_user_set = set() # We need to be clever on the invalidating caches callbacks, as # otherwise the invalidation callback holds a reference to the object, # potentially causing it to leak. # To get around this we pass a function that on invalidations looks ups # the RoomsForUser entry in the cache, rather than keeping a reference # to self around in the callback. self.invalidate_all_cb = _Invalidation(rules_for_room_cache, room_id) @defer.inlineCallbacks def get_rules(self, event, context): """Given an event context return the rules for all users who are currently in the room. 
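        Args:
            event: the event we are computing push rules for (used to spot
                membership changes for newly joined users).
            context: the event's context; its state_group / prev_group /
                delta_ids determine how much of the cached result can be
                reused.

        Returns:
            dict of user_id -> push rules, for local users currently joined
            to the room.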
""" state_group = context.state_group if state_group and self.state_group == state_group: logger.debug("Using cached rules for %r", self.room_id) self.room_push_rule_cache_metrics.inc_hits() defer.returnValue(self.rules_by_user) with (yield self.linearizer.queue(())): if state_group and self.state_group == state_group: logger.debug("Using cached rules for %r", self.room_id) self.room_push_rule_cache_metrics.inc_hits() defer.returnValue(self.rules_by_user) self.room_push_rule_cache_metrics.inc_misses() ret_rules_by_user = {} missing_member_event_ids = {} if state_group and self.state_group == context.prev_group: # If we have a simple delta then we can reuse most of the previous # results. ret_rules_by_user = self.rules_by_user current_state_ids = context.delta_ids push_rules_delta_state_cache_metric.inc_hits() else: current_state_ids = context.current_state_ids push_rules_delta_state_cache_metric.inc_misses() push_rules_state_size_counter.inc_by(len(current_state_ids)) logger.debug( "Looking for member changes in %r %r", state_group, current_state_ids ) # Loop through to see which member events we've seen and have rules # for and which we need to fetch for key in current_state_ids: typ, user_id = key if typ != EventTypes.Member: continue if user_id in self.uninteresting_user_set: continue if not self.is_mine_id(user_id): self.uninteresting_user_set.add(user_id) continue if self.store.get_if_app_services_interested_in_user(user_id): self.uninteresting_user_set.add(user_id) continue event_id = current_state_ids[key] res = self.member_map.get(event_id, None) if res: user_id, state = res if state == Membership.JOIN: rules = self.rules_by_user.get(user_id, None) if rules: ret_rules_by_user[user_id] = rules continue # If a user has left a room we remove their push rule. If they # joined then we readd it later in _update_rules_with_member_event_ids ret_rules_by_user.pop(user_id, None) missing_member_event_ids[user_id] = event_id if missing_member_event_ids: # If we have some memebr events we haven't seen, look them up # and fetch push rules for them if appropriate. logger.debug("Found new member events %r", missing_member_event_ids) yield self._update_rules_with_member_event_ids( ret_rules_by_user, missing_member_event_ids, state_group, event ) else: # The push rules didn't change but lets update the cache anyway self.update_cache( self.sequence, members={}, # There were no membership changes rules_by_user=ret_rules_by_user, state_group=state_group ) if logger.isEnabledFor(logging.DEBUG): logger.debug( "Returning push rules for %r %r", self.room_id, ret_rules_by_user.keys(), ) defer.returnValue(ret_rules_by_user) @defer.inlineCallbacks def _update_rules_with_member_event_ids(self, ret_rules_by_user, member_event_ids, state_group, event): """Update the partially filled rules_by_user dict by fetching rules for any newly joined users in the `member_event_ids` list. Args: ret_rules_by_user (dict): Partiallly filled dict of push rules. Gets updated with any new rules. member_event_ids (list): List of event ids for membership events that have happened since the last time we filled rules_by_user state_group: The state group we are currently computing push rules for. Used when updating the cache. 
""" sequence = self.sequence rows = yield self.store._simple_select_many_batch( table="room_memberships", column="event_id", iterable=member_event_ids.values(), retcols=('user_id', 'membership', 'event_id'), keyvalues={}, batch_size=500, desc="_get_rules_for_member_event_ids", ) members = { row["event_id"]: (row["user_id"], row["membership"]) for row in rows } # If the event is a join event then it will be in current state evnts # map but not in the DB, so we have to explicitly insert it. if event.type == EventTypes.Member: for event_id in member_event_ids.itervalues(): if event_id == event.event_id: members[event_id] = (event.state_key, event.membership) if logger.isEnabledFor(logging.DEBUG): logger.debug("Found members %r: %r", self.room_id, members.values()) interested_in_user_ids = set( user_id for user_id, membership in members.itervalues() if membership == Membership.JOIN ) logger.debug("Joined: %r", interested_in_user_ids) if_users_with_pushers = yield self.store.get_if_users_have_pushers( interested_in_user_ids, on_invalidate=self.invalidate_all_cb, ) user_ids = set( uid for uid, have_pusher in if_users_with_pushers.iteritems() if have_pusher ) logger.debug("With pushers: %r", user_ids) users_with_receipts = yield self.store.get_users_with_read_receipts_in_room( self.room_id, on_invalidate=self.invalidate_all_cb, ) logger.debug("With receipts: %r", users_with_receipts) # any users with pushers must be ours: they have pushers for uid in users_with_receipts: if uid in interested_in_user_ids: user_ids.add(uid) rules_by_user = yield self.store.bulk_get_push_rules( user_ids, on_invalidate=self.invalidate_all_cb, ) ret_rules_by_user.update( item for item in rules_by_user.iteritems() if item[0] is not None ) self.update_cache(sequence, members, ret_rules_by_user, state_group) def invalidate_all(self): # Note: Don't hand this function directly to an invalidation callback # as it keeps a reference to self and will stop this instance from being # GC'd if it gets dropped from the rules_to_user cache. Instead use # `self.invalidate_all_cb` logger.debug("Invalidating RulesForRoom for %r", self.room_id) self.sequence += 1 self.state_group = object() self.member_map = {} self.rules_by_user = {} push_rules_invalidation_counter.inc() def update_cache(self, sequence, members, rules_by_user, state_group): if sequence == self.sequence: self.member_map.update(members) self.rules_by_user = rules_by_user self.state_group = state_group class _Invalidation(namedtuple("_Invalidation", ("cache", "room_id"))): # We rely on _CacheContext implementing __eq__ and __hash__ sensibly, # which namedtuple does for us (i.e. two _CacheContext are the same if # their caches and keys match). This is important in particular to # dedupe when we add callbacks to lru cache nodes, otherwise the number # of callbacks would grow. def __call__(self): rules = self.cache.get(self.room_id, None, update_metrics=False) if rules: rules.invalidate_all() synapse-0.24.0/synapse/push/clientformat.py000066400000000000000000000062141317335640100207670ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from synapse.push.rulekinds import ( PRIORITY_CLASS_MAP, PRIORITY_CLASS_INVERSE_MAP ) import copy def format_push_rules_for_user(user, ruleslist): """Converts a list of rawrules and a enabled map into nested dictionaries to match the Matrix client-server format for push rules""" # We're going to be mutating this a lot, so do a deep copy ruleslist = copy.deepcopy(ruleslist) rules = {'global': {}, 'device': {}} rules['global'] = _add_empty_priority_class_arrays(rules['global']) for r in ruleslist: rulearray = None template_name = _priority_class_to_template_name(r['priority_class']) # Remove internal stuff. for c in r["conditions"]: c.pop("_id", None) pattern_type = c.pop("pattern_type", None) if pattern_type == "user_id": c["pattern"] = user.to_string() elif pattern_type == "user_localpart": c["pattern"] = user.localpart rulearray = rules['global'][template_name] template_rule = _rule_to_template(r) if template_rule: if 'enabled' in r: template_rule['enabled'] = r['enabled'] else: template_rule['enabled'] = True rulearray.append(template_rule) return rules def _add_empty_priority_class_arrays(d): for pc in PRIORITY_CLASS_MAP.keys(): d[pc] = [] return d def _rule_to_template(rule): unscoped_rule_id = None if 'rule_id' in rule: unscoped_rule_id = _rule_id_from_namespaced(rule['rule_id']) template_name = _priority_class_to_template_name(rule['priority_class']) if template_name in ['override', 'underride']: templaterule = {k: rule[k] for k in ["conditions", "actions"]} elif template_name in ["sender", "room"]: templaterule = {'actions': rule['actions']} unscoped_rule_id = rule['conditions'][0]['pattern'] elif template_name == 'content': if len(rule["conditions"]) != 1: return None thecond = rule["conditions"][0] if "pattern" not in thecond: return None templaterule = {'actions': rule['actions']} templaterule["pattern"] = thecond["pattern"] if unscoped_rule_id: templaterule['rule_id'] = unscoped_rule_id if 'default' in rule: templaterule['default'] = rule['default'] return templaterule def _rule_id_from_namespaced(in_rule_id): return in_rule_id.split('/')[-1] def _priority_class_to_template_name(pc): return PRIORITY_CLASS_INVERSE_MAP[pc] synapse-0.24.0/synapse/push/emailpusher.py000066400000000000000000000254371317335640100206260ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
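# An EmailPusher deliberately lags behind other push: nothing is emailed until
# DELAY_BEFORE_MAIL_MS after the event that triggered the notification, and once
# a mail has been sent the per-room throttle parameters below decide how long we
# wait before emailing about that room again.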
from twisted.internet import defer, reactor from twisted.internet.error import AlreadyCalled, AlreadyCancelled import logging from synapse.util.metrics import Measure from synapse.util.logcontext import LoggingContext logger = logging.getLogger(__name__) # The amount of time we always wait before ever emailing about a notification # (to give the user a chance to respond to other push or notice the window) DELAY_BEFORE_MAIL_MS = 10 * 60 * 1000 # THROTTLE is the minimum time between mail notifications sent for a given room. # Each room maintains its own throttle counter, but each new mail notification # sends the pending notifications for all rooms. THROTTLE_START_MS = 10 * 60 * 1000 THROTTLE_MAX_MS = 24 * 60 * 60 * 1000 # 24h # THROTTLE_MULTIPLIER = 6 # 10 mins, 1 hour, 6 hours, 24 hours THROTTLE_MULTIPLIER = 144 # 10 mins, 24 hours - i.e. jump straight to 1 day # If no event triggers a notification for this long after the previous, # the throttle is released. # 12 hours - a gap of 12 hours in conversation is surely enough to merit a new # notification when things get going again... THROTTLE_RESET_AFTER_MS = (12 * 60 * 60 * 1000) # does each email include all unread notifs, or just the ones which have happened # since the last mail? # XXX: this is currently broken as it includes ones from parted rooms(!) INCLUDE_ALL_UNREAD_NOTIFS = False class EmailPusher(object): """ A pusher that sends email notifications about events (approximately) when they happen. This shares quite a bit of code with httpusher: it would be good to factor out the common parts """ def __init__(self, hs, pusherdict, mailer): self.hs = hs self.mailer = mailer self.store = self.hs.get_datastore() self.clock = self.hs.get_clock() self.pusher_id = pusherdict['id'] self.user_id = pusherdict['user_name'] self.app_id = pusherdict['app_id'] self.email = pusherdict['pushkey'] self.last_stream_ordering = pusherdict['last_stream_ordering'] self.timed_call = None self.throttle_params = None # See httppusher self.max_stream_ordering = None self.processing = False @defer.inlineCallbacks def on_started(self): if self.mailer is not None: self.throttle_params = yield self.store.get_throttle_params_by_room( self.pusher_id ) yield self._process() def on_stop(self): if self.timed_call: try: self.timed_call.cancel() except (AlreadyCalled, AlreadyCancelled): pass self.timed_call = None @defer.inlineCallbacks def on_new_notifications(self, min_stream_ordering, max_stream_ordering): self.max_stream_ordering = max(max_stream_ordering, self.max_stream_ordering) yield self._process() def on_new_receipts(self, min_stream_id, max_stream_id): # We could wake up and cancel the timer but there tend to be quite a # lot of read receipts so it's probably less work to just let the # timer fire return defer.succeed(None) @defer.inlineCallbacks def on_timer(self): self.timed_call = None yield self._process() @defer.inlineCallbacks def _process(self): if self.processing: return with LoggingContext("emailpush._process"): with Measure(self.clock, "emailpush._process"): try: self.processing = True # if the max ordering changes while we're running _unsafe_process, # call it again, and so on until we've caught up. 
while True: starting_max_ordering = self.max_stream_ordering try: yield self._unsafe_process() except: logger.exception("Exception processing notifs") if self.max_stream_ordering == starting_max_ordering: break finally: self.processing = False @defer.inlineCallbacks def _unsafe_process(self): """ Main logic of the push loop without the wrapper function that sets up logging, measures and guards against multiple instances of it being run. """ start = 0 if INCLUDE_ALL_UNREAD_NOTIFS else self.last_stream_ordering fn = self.store.get_unread_push_actions_for_user_in_range_for_email unprocessed = yield fn(self.user_id, start, self.max_stream_ordering) soonest_due_at = None if not unprocessed: yield self.save_last_stream_ordering_and_success(self.max_stream_ordering) return for push_action in unprocessed: received_at = push_action['received_ts'] if received_at is None: received_at = 0 notif_ready_at = received_at + DELAY_BEFORE_MAIL_MS room_ready_at = self.room_ready_to_notify_at( push_action['room_id'] ) should_notify_at = max(notif_ready_at, room_ready_at) if should_notify_at < self.clock.time_msec(): # one of our notifications is ready for sending, so we send # *one* email updating the user on their notifications, # we then consider all previously outstanding notifications # to be delivered. reason = { 'room_id': push_action['room_id'], 'now': self.clock.time_msec(), 'received_at': received_at, 'delay_before_mail_ms': DELAY_BEFORE_MAIL_MS, 'last_sent_ts': self.get_room_last_sent_ts(push_action['room_id']), 'throttle_ms': self.get_room_throttle_ms(push_action['room_id']), } yield self.send_notification(unprocessed, reason) yield self.save_last_stream_ordering_and_success(max([ ea['stream_ordering'] for ea in unprocessed ])) # we update the throttle on all the possible unprocessed push actions for ea in unprocessed: yield self.sent_notif_update_throttle( ea['room_id'], ea ) break else: if soonest_due_at is None or should_notify_at < soonest_due_at: soonest_due_at = should_notify_at if self.timed_call is not None: try: self.timed_call.cancel() except (AlreadyCalled, AlreadyCancelled): pass self.timed_call = None if soonest_due_at is not None: self.timed_call = reactor.callLater( self.seconds_until(soonest_due_at), self.on_timer ) @defer.inlineCallbacks def save_last_stream_ordering_and_success(self, last_stream_ordering): self.last_stream_ordering = last_stream_ordering yield self.store.update_pusher_last_stream_ordering_and_success( self.app_id, self.email, self.user_id, last_stream_ordering, self.clock.time_msec() ) def seconds_until(self, ts_msec): secs = (ts_msec - self.clock.time_msec()) / 1000 return max(secs, 0) def get_room_throttle_ms(self, room_id): if room_id in self.throttle_params: return self.throttle_params[room_id]["throttle_ms"] else: return 0 def get_room_last_sent_ts(self, room_id): if room_id in self.throttle_params: return self.throttle_params[room_id]["last_sent_ts"] else: return 0 def room_ready_to_notify_at(self, room_id): """ Determines whether throttling should prevent us from sending an email for the given room Returns: The timestamp when we are next allowed to send an email notif for this room """ last_sent_ts = self.get_room_last_sent_ts(room_id) throttle_ms = self.get_room_throttle_ms(room_id) may_send_at = last_sent_ts + throttle_ms return may_send_at @defer.inlineCallbacks def sent_notif_update_throttle(self, room_id, notified_push_action): # We have sent a notification, so update the throttle accordingly. 
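# With the default constants the per-room delay jumps straight from
# THROTTLE_START_MS (10 minutes) to THROTTLE_MAX_MS (24 hours), since
# 10 minutes * THROTTLE_MULTIPLIER (144) is exactly 24 hours.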
# If the event that triggered the notif happened more than # THROTTLE_RESET_AFTER_MS after the previous one that triggered a # notif, we release the throttle. Otherwise, the throttle is increased. time_of_previous_notifs = yield self.store.get_time_of_last_push_action_before( notified_push_action['stream_ordering'] ) time_of_this_notifs = notified_push_action['received_ts'] if time_of_previous_notifs is not None and time_of_this_notifs is not None: gap = time_of_this_notifs - time_of_previous_notifs else: # if we don't know the arrival time of one of the notifs (it was not # stored prior to email notification code) then assume a gap of # zero which will just not reset the throttle gap = 0 current_throttle_ms = self.get_room_throttle_ms(room_id) if gap > THROTTLE_RESET_AFTER_MS: new_throttle_ms = THROTTLE_START_MS else: if current_throttle_ms == 0: new_throttle_ms = THROTTLE_START_MS else: new_throttle_ms = min( current_throttle_ms * THROTTLE_MULTIPLIER, THROTTLE_MAX_MS ) self.throttle_params[room_id] = { "last_sent_ts": self.clock.time_msec(), "throttle_ms": new_throttle_ms } yield self.store.set_throttle_params( self.pusher_id, room_id, self.throttle_params[room_id] ) @defer.inlineCallbacks def send_notification(self, push_actions, reason): logger.info("Sending notif email for user %r", self.user_id) yield self.mailer.send_notification_mail( self.app_id, self.user_id, self.email, push_actions, reason ) synapse-0.24.0/synapse/push/httppusher.py000066400000000000000000000331771317335640100205160ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from synapse.push import PusherConfigException from twisted.internet import defer, reactor from twisted.internet.error import AlreadyCalled, AlreadyCancelled import logging import push_rule_evaluator import push_tools from synapse.util.logcontext import LoggingContext from synapse.util.metrics import Measure logger = logging.getLogger(__name__) class HttpPusher(object): INITIAL_BACKOFF_SEC = 1 # in seconds because that's what Twisted takes MAX_BACKOFF_SEC = 60 * 60 # This one's in ms because we compare it against the clock GIVE_UP_AFTER_MS = 24 * 60 * 60 * 1000 def __init__(self, hs, pusherdict): self.hs = hs self.store = self.hs.get_datastore() self.clock = self.hs.get_clock() self.state_handler = self.hs.get_state_handler() self.user_id = pusherdict['user_name'] self.app_id = pusherdict['app_id'] self.app_display_name = pusherdict['app_display_name'] self.device_display_name = pusherdict['device_display_name'] self.pushkey = pusherdict['pushkey'] self.pushkey_ts = pusherdict['ts'] self.data = pusherdict['data'] self.last_stream_ordering = pusherdict['last_stream_ordering'] self.backoff_delay = HttpPusher.INITIAL_BACKOFF_SEC self.failing_since = pusherdict['failing_since'] self.timed_call = None self.processing = False # This is the highest stream ordering we know it's safe to process. 
# When new events arrive, we'll be given a window of new events: we # should honour this rather than just looking for anything higher # because of potential out-of-order event serialisation. This starts # off as None though as we don't know any better. self.max_stream_ordering = None if 'data' not in pusherdict: raise PusherConfigException( "No 'data' key for HTTP pusher" ) self.data = pusherdict['data'] self.name = "%s/%s/%s" % ( pusherdict['user_name'], pusherdict['app_id'], pusherdict['pushkey'], ) if 'url' not in self.data: raise PusherConfigException( "'url' required in data for HTTP pusher" ) self.url = self.data['url'] self.http_client = hs.get_simple_http_client() self.data_minus_url = {} self.data_minus_url.update(self.data) del self.data_minus_url['url'] @defer.inlineCallbacks def on_started(self): yield self._process() @defer.inlineCallbacks def on_new_notifications(self, min_stream_ordering, max_stream_ordering): self.max_stream_ordering = max(max_stream_ordering, self.max_stream_ordering) yield self._process() @defer.inlineCallbacks def on_new_receipts(self, min_stream_id, max_stream_id): # Note that the min here shouldn't be relied upon to be accurate. # We could check the receipts are actually m.read receipts here, # but currently that's the only type of receipt anyway... with LoggingContext("push.on_new_receipts"): with Measure(self.clock, "push.on_new_receipts"): badge = yield push_tools.get_badge_count( self.hs.get_datastore(), self.user_id ) yield self._send_badge(badge) @defer.inlineCallbacks def on_timer(self): yield self._process() def on_stop(self): if self.timed_call: try: self.timed_call.cancel() except (AlreadyCalled, AlreadyCancelled): pass self.timed_call = None @defer.inlineCallbacks def _process(self): if self.processing: return with LoggingContext("push._process"): with Measure(self.clock, "push._process"): try: self.processing = True # if the max ordering changes while we're running _unsafe_process, # call it again, and so on until we've caught up. while True: starting_max_ordering = self.max_stream_ordering try: yield self._unsafe_process() except: logger.exception("Exception processing notifs") if self.max_stream_ordering == starting_max_ordering: break finally: self.processing = False @defer.inlineCallbacks def _unsafe_process(self): """ Looks for unset notifications and dispatch them, in order Never call this directly: use _process which will only allow this to run once per pusher. """ fn = self.store.get_unread_push_actions_for_user_in_range_for_http unprocessed = yield fn( self.user_id, self.last_stream_ordering, self.max_stream_ordering ) for push_action in unprocessed: processed = yield self._process_one(push_action) if processed: self.backoff_delay = HttpPusher.INITIAL_BACKOFF_SEC self.last_stream_ordering = push_action['stream_ordering'] yield self.store.update_pusher_last_stream_ordering_and_success( self.app_id, self.pushkey, self.user_id, self.last_stream_ordering, self.clock.time_msec() ) if self.failing_since: self.failing_since = None yield self.store.update_pusher_failing_since( self.app_id, self.pushkey, self.user_id, self.failing_since ) else: if not self.failing_since: self.failing_since = self.clock.time_msec() yield self.store.update_pusher_failing_since( self.app_id, self.pushkey, self.user_id, self.failing_since ) if ( self.failing_since and self.failing_since < self.clock.time_msec() - HttpPusher.GIVE_UP_AFTER_MS ): # we really only give up so that if the URL gets # fixed, we don't suddenly deliver a load # of old notifications. 
logger.warn("Giving up on a notification to user %s, " "pushkey %s", self.user_id, self.pushkey) self.backoff_delay = HttpPusher.INITIAL_BACKOFF_SEC self.last_stream_ordering = push_action['stream_ordering'] yield self.store.update_pusher_last_stream_ordering( self.app_id, self.pushkey, self.user_id, self.last_stream_ordering ) self.failing_since = None yield self.store.update_pusher_failing_since( self.app_id, self.pushkey, self.user_id, self.failing_since ) else: logger.info("Push failed: delaying for %ds", self.backoff_delay) self.timed_call = reactor.callLater(self.backoff_delay, self.on_timer) self.backoff_delay = min(self.backoff_delay * 2, self.MAX_BACKOFF_SEC) break @defer.inlineCallbacks def _process_one(self, push_action): if 'notify' not in push_action['actions']: defer.returnValue(True) tweaks = push_rule_evaluator.tweaks_for_actions(push_action['actions']) badge = yield push_tools.get_badge_count(self.hs.get_datastore(), self.user_id) event = yield self.store.get_event(push_action['event_id'], allow_none=True) if event is None: defer.returnValue(True) # It's been redacted rejected = yield self.dispatch_push(event, tweaks, badge) if rejected is False: defer.returnValue(False) if isinstance(rejected, list) or isinstance(rejected, tuple): for pk in rejected: if pk != self.pushkey: # for sanity, we only remove the pushkey if it # was the one we actually sent... logger.warn( ("Ignoring rejected pushkey %s because we" " didn't send it"), pk ) else: logger.info( "Pushkey %s was rejected: removing", pk ) yield self.hs.remove_pusher( self.app_id, pk, self.user_id ) defer.returnValue(True) @defer.inlineCallbacks def _build_notification_dict(self, event, tweaks, badge): if self.data.get('format') == 'event_id_only': d = { 'notification': { 'event_id': event.event_id, 'room_id': event.room_id, 'counts': { 'unread': badge, }, 'devices': [ { 'app_id': self.app_id, 'pushkey': self.pushkey, 'pushkey_ts': long(self.pushkey_ts / 1000), 'data': self.data_minus_url, } ] } } defer.returnValue(d) ctx = yield push_tools.get_context_for_event( self.store, self.state_handler, event, self.user_id ) d = { 'notification': { 'id': event.event_id, # deprecated: remove soon 'event_id': event.event_id, 'room_id': event.room_id, 'type': event.type, 'sender': event.user_id, 'counts': { # -- we don't mark messages as read yet so # we have no way of knowing # Just set the badge to 1 until we have read receipts 'unread': badge, # 'missed_calls': 2 }, 'devices': [ { 'app_id': self.app_id, 'pushkey': self.pushkey, 'pushkey_ts': long(self.pushkey_ts / 1000), 'data': self.data_minus_url, 'tweaks': tweaks } ] } } if event.type == 'm.room.member': d['notification']['membership'] = event.content['membership'] d['notification']['user_is_target'] = event.state_key == self.user_id if not self.hs.config.push_redact_content and 'content' in event: d['notification']['content'] = event.content # We no longer send aliases separately, instead, we send the human # readable name of the room, which may be an alias. 
if 'sender_display_name' in ctx and len(ctx['sender_display_name']) > 0: d['notification']['sender_display_name'] = ctx['sender_display_name'] if 'name' in ctx and len(ctx['name']) > 0: d['notification']['room_name'] = ctx['name'] defer.returnValue(d) @defer.inlineCallbacks def dispatch_push(self, event, tweaks, badge): notification_dict = yield self._build_notification_dict(event, tweaks, badge) if not notification_dict: defer.returnValue([]) try: resp = yield self.http_client.post_json_get_json(self.url, notification_dict) except: logger.warn("Failed to push %s ", self.url) defer.returnValue(False) rejected = [] if 'rejected' in resp: rejected = resp['rejected'] defer.returnValue(rejected) @defer.inlineCallbacks def _send_badge(self, badge): logger.info("Sending updated badge count %d to %r", badge, self.user_id) d = { 'notification': { 'id': '', 'type': None, 'sender': '', 'counts': { 'unread': badge }, 'devices': [ { 'app_id': self.app_id, 'pushkey': self.pushkey, 'pushkey_ts': long(self.pushkey_ts / 1000), 'data': self.data_minus_url, } ] } } try: resp = yield self.http_client.post_json_get_json(self.url, d) except: logger.exception("Failed to push %s ", self.url) defer.returnValue(False) rejected = [] if 'rejected' in resp: rejected = resp['rejected'] defer.returnValue(rejected) synapse-0.24.0/synapse/push/mailer.py000066400000000000000000000510571317335640100175560ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from twisted.internet import defer from twisted.mail.smtp import sendmail import email.utils import email.mime.multipart from email.mime.text import MIMEText from email.mime.multipart import MIMEMultipart from synapse.util.async import concurrently_execute from synapse.push.presentable_names import ( calculate_room_name, name_from_member_event, descriptor_from_member_events ) from synapse.types import UserID from synapse.api.errors import StoreError from synapse.api.constants import EventTypes from synapse.visibility import filter_events_for_client import jinja2 import bleach import time import urllib import logging logger = logging.getLogger(__name__) MESSAGE_FROM_PERSON_IN_ROOM = "You have a message on %(app)s from %(person)s " \ "in the %(room)s room..." MESSAGE_FROM_PERSON = "You have a message on %(app)s from %(person)s..." MESSAGES_FROM_PERSON = "You have messages on %(app)s from %(person)s..." MESSAGES_IN_ROOM = "You have messages on %(app)s in the %(room)s room..." MESSAGES_IN_ROOM_AND_OTHERS = \ "You have messages on %(app)s in the %(room)s room and others..." MESSAGES_FROM_PERSON_AND_OTHERS = \ "You have messages on %(app)s from %(person)s and others..." INVITE_FROM_PERSON_TO_ROOM = "%(person)s has invited you to join the " \ "%(room)s room on %(app)s..." INVITE_FROM_PERSON = "%(person)s has invited you to chat on %(app)s..." 
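# These summary templates are filled in with plain %-interpolation in
# make_summary_text below, e.g. (with made-up example values):
#   MESSAGE_FROM_PERSON_IN_ROOM % {
#       "app": "Matrix", "person": "Alice", "room": "#synapse",
#   }
#   -> "You have a message on Matrix from Alice in the #synapse room..."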
CONTEXT_BEFORE = 1 CONTEXT_AFTER = 1 # From https://github.com/matrix-org/matrix-react-sdk/blob/master/src/HtmlUtils.js ALLOWED_TAGS = [ 'font', # custom to matrix for IRC-style font coloring 'del', # for markdown # deliberately no h1/h2 to stop people shouting. 'h3', 'h4', 'h5', 'h6', 'blockquote', 'p', 'a', 'ul', 'ol', 'nl', 'li', 'b', 'i', 'u', 'strong', 'em', 'strike', 'code', 'hr', 'br', 'div', 'table', 'thead', 'caption', 'tbody', 'tr', 'th', 'td', 'pre' ] ALLOWED_ATTRS = { # custom ones first: "font": ["color"], # custom to matrix "a": ["href", "name", "target"], # remote target: custom to matrix # We don't currently allow img itself by default, but this # would make sense if we did "img": ["src"], } # When bleach release a version with this option, we can specify schemes # ALLOWED_SCHEMES = ["http", "https", "ftp", "mailto"] class Mailer(object): def __init__(self, hs, app_name, notif_template_html, notif_template_text): self.hs = hs self.notif_template_html = notif_template_html self.notif_template_text = notif_template_text self.store = self.hs.get_datastore() self.macaroon_gen = self.hs.get_macaroon_generator() self.state_handler = self.hs.get_state_handler() self.app_name = app_name logger.info("Created Mailer for app_name %s" % app_name) @defer.inlineCallbacks def send_notification_mail(self, app_id, user_id, email_address, push_actions, reason): try: from_string = self.hs.config.email_notif_from % { "app": self.app_name } except TypeError: from_string = self.hs.config.email_notif_from raw_from = email.utils.parseaddr(from_string)[1] raw_to = email.utils.parseaddr(email_address)[1] if raw_to == '': raise RuntimeError("Invalid 'to' address") rooms_in_order = deduped_ordered_list( [pa['room_id'] for pa in push_actions] ) notif_events = yield self.store.get_events( [pa['event_id'] for pa in push_actions] ) notifs_by_room = {} for pa in push_actions: notifs_by_room.setdefault(pa["room_id"], []).append(pa) # collect the current state for all the rooms in which we have # notifications state_by_room = {} try: user_display_name = yield self.store.get_profile_displayname( UserID.from_string(user_id).localpart ) if user_display_name is None: user_display_name = user_id except StoreError: user_display_name = user_id @defer.inlineCallbacks def _fetch_room_state(room_id): room_state = yield self.store.get_current_state_ids(room_id) state_by_room[room_id] = room_state # Run at most 3 of these at once: sync does 10 at a time but email # notifs are much less realtime than sync so we can afford to wait a bit. 
yield concurrently_execute(_fetch_room_state, rooms_in_order, 3) # actually sort our so-called rooms_in_order list, most recent room first rooms_in_order.sort( key=lambda r: -(notifs_by_room[r][-1]['received_ts'] or 0) ) rooms = [] for r in rooms_in_order: roomvars = yield self.get_room_vars( r, user_id, notifs_by_room[r], notif_events, state_by_room[r] ) rooms.append(roomvars) reason['room_name'] = yield calculate_room_name( self.store, state_by_room[reason['room_id']], user_id, fallback_to_members=True ) summary_text = yield self.make_summary_text( notifs_by_room, state_by_room, notif_events, user_id, reason ) template_vars = { "user_display_name": user_display_name, "unsubscribe_link": self.make_unsubscribe_link( user_id, app_id, email_address ), "summary_text": summary_text, "app_name": self.app_name, "rooms": rooms, "reason": reason, } html_text = self.notif_template_html.render(**template_vars) html_part = MIMEText(html_text, "html", "utf8") plain_text = self.notif_template_text.render(**template_vars) text_part = MIMEText(plain_text, "plain", "utf8") multipart_msg = MIMEMultipart('alternative') multipart_msg['Subject'] = "[%s] %s" % (self.app_name, summary_text) multipart_msg['From'] = from_string multipart_msg['To'] = email_address multipart_msg['Date'] = email.utils.formatdate() multipart_msg['Message-ID'] = email.utils.make_msgid() multipart_msg.attach(text_part) multipart_msg.attach(html_part) logger.info("Sending email push notification to %s" % email_address) # logger.debug(html_text) yield sendmail( self.hs.config.email_smtp_host, raw_from, raw_to, multipart_msg.as_string(), port=self.hs.config.email_smtp_port, requireAuthentication=self.hs.config.email_smtp_user is not None, username=self.hs.config.email_smtp_user, password=self.hs.config.email_smtp_pass, requireTransportSecurity=self.hs.config.require_transport_security ) @defer.inlineCallbacks def get_room_vars(self, room_id, user_id, notifs, notif_events, room_state_ids): my_member_event_id = room_state_ids[("m.room.member", user_id)] my_member_event = yield self.store.get_event(my_member_event_id) is_invite = my_member_event.content["membership"] == "invite" room_name = yield calculate_room_name(self.store, room_state_ids, user_id) room_vars = { "title": room_name, "hash": string_ordinal_total(room_id), # See sender avatar hash "notifs": [], "invite": is_invite, "link": self.make_room_link(room_id), } if not is_invite: for n in notifs: notifvars = yield self.get_notif_vars( n, user_id, notif_events[n['event_id']], room_state_ids ) # merge overlapping notifs together. # relies on the notifs being in chronological order. 
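# (Two notifs overlap when their context windows share a message: the shared
# messages are collapsed into the earlier notif and any trailing messages from
# the later one are appended to it.)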
merge = False if room_vars['notifs'] and 'messages' in room_vars['notifs'][-1]: prev_messages = room_vars['notifs'][-1]['messages'] for message in notifvars['messages']: pm = filter(lambda pm: pm['id'] == message['id'], prev_messages) if pm: if not message["is_historical"]: pm[0]["is_historical"] = False merge = True elif merge: # we're merging, so append any remaining messages # in this notif to the previous one prev_messages.append(message) if not merge: room_vars['notifs'].append(notifvars) defer.returnValue(room_vars) @defer.inlineCallbacks def get_notif_vars(self, notif, user_id, notif_event, room_state_ids): results = yield self.store.get_events_around( notif['room_id'], notif['event_id'], before_limit=CONTEXT_BEFORE, after_limit=CONTEXT_AFTER ) ret = { "link": self.make_notif_link(notif), "ts": notif['received_ts'], "messages": [], } the_events = yield filter_events_for_client( self.store, user_id, results["events_before"] ) the_events.append(notif_event) for event in the_events: messagevars = yield self.get_message_vars(notif, event, room_state_ids) if messagevars is not None: ret['messages'].append(messagevars) defer.returnValue(ret) @defer.inlineCallbacks def get_message_vars(self, notif, event, room_state_ids): if event.type != EventTypes.Message: return sender_state_event_id = room_state_ids[("m.room.member", event.sender)] sender_state_event = yield self.store.get_event(sender_state_event_id) sender_name = name_from_member_event(sender_state_event) sender_avatar_url = sender_state_event.content.get("avatar_url") # 'hash' for deterministically picking default images: use # sender_hash % the number of default images to choose from sender_hash = string_ordinal_total(event.sender) msgtype = event.content.get("msgtype") ret = { "msgtype": msgtype, "is_historical": event.event_id != notif['event_id'], "id": event.event_id, "ts": event.origin_server_ts, "sender_name": sender_name, "sender_avatar_url": sender_avatar_url, "sender_hash": sender_hash, } if msgtype == "m.text": self.add_text_message_vars(ret, event) elif msgtype == "m.image": self.add_image_message_vars(ret, event) if "body" in event.content: ret["body_text_plain"] = event.content["body"] defer.returnValue(ret) def add_text_message_vars(self, messagevars, event): msgformat = event.content.get("format") messagevars["format"] = msgformat formatted_body = event.content.get("formatted_body") body = event.content.get("body") if msgformat == "org.matrix.custom.html" and formatted_body: messagevars["body_text_html"] = safe_markup(formatted_body) elif body: messagevars["body_text_html"] = safe_text(body) return messagevars def add_image_message_vars(self, messagevars, event): messagevars["image_url"] = event.content["url"] return messagevars @defer.inlineCallbacks def make_summary_text(self, notifs_by_room, room_state_ids, notif_events, user_id, reason): if len(notifs_by_room) == 1: # Only one room has new stuff room_id = notifs_by_room.keys()[0] # If the room has some kind of name, use it, but we don't # want the generated-from-names one here otherwise we'll # end up with, "new message from Bob in the Bob room" room_name = yield calculate_room_name( self.store, room_state_ids[room_id], user_id, fallback_to_members=False ) my_member_event_id = room_state_ids[room_id][("m.room.member", user_id)] my_member_event = yield self.store.get_event(my_member_event_id) if my_member_event.content["membership"] == "invite": inviter_member_event_id = room_state_ids[room_id][ ("m.room.member", my_member_event.sender) ] inviter_member_event = 
yield self.store.get_event( inviter_member_event_id ) inviter_name = name_from_member_event(inviter_member_event) if room_name is None: defer.returnValue(INVITE_FROM_PERSON % { "person": inviter_name, "app": self.app_name }) else: defer.returnValue(INVITE_FROM_PERSON_TO_ROOM % { "person": inviter_name, "room": room_name, "app": self.app_name, }) sender_name = None if len(notifs_by_room[room_id]) == 1: # There is just the one notification, so give some detail event = notif_events[notifs_by_room[room_id][0]["event_id"]] if ("m.room.member", event.sender) in room_state_ids[room_id]: state_event_id = room_state_ids[room_id][ ("m.room.member", event.sender) ] state_event = yield self.store.get_event(state_event_id) sender_name = name_from_member_event(state_event) if sender_name is not None and room_name is not None: defer.returnValue(MESSAGE_FROM_PERSON_IN_ROOM % { "person": sender_name, "room": room_name, "app": self.app_name, }) elif sender_name is not None: defer.returnValue(MESSAGE_FROM_PERSON % { "person": sender_name, "app": self.app_name, }) else: # There's more than one notification for this room, so just # say there are several if room_name is not None: defer.returnValue(MESSAGES_IN_ROOM % { "room": room_name, "app": self.app_name, }) else: # If the room doesn't have a name, say who the messages # are from explicitly to avoid, "messages in the Bob room" sender_ids = list(set([ notif_events[n['event_id']].sender for n in notifs_by_room[room_id] ])) member_events = yield self.store.get_events([ room_state_ids[room_id][("m.room.member", s)] for s in sender_ids ]) defer.returnValue(MESSAGES_FROM_PERSON % { "person": descriptor_from_member_events(member_events.values()), "app": self.app_name, }) else: # Stuff's happened in multiple different rooms # ...but we still refer to the 'reason' room which triggered the mail if reason['room_name'] is not None: defer.returnValue(MESSAGES_IN_ROOM_AND_OTHERS % { "room": reason['room_name'], "app": self.app_name, }) else: # If the reason room doesn't have a name, say who the messages # are from explicitly to avoid, "messages in the Bob room" sender_ids = list(set([ notif_events[n['event_id']].sender for n in notifs_by_room[reason['room_id']] ])) member_events = yield self.store.get_events([ room_state_ids[room_id][("m.room.member", s)] for s in sender_ids ]) defer.returnValue(MESSAGES_FROM_PERSON_AND_OTHERS % { "person": descriptor_from_member_events(member_events.values()), "app": self.app_name, }) def make_room_link(self, room_id): if self.hs.config.email_riot_base_url: base_url = self.hs.config.email_riot_base_url elif self.app_name == "Vector": # need /beta for Universal Links to work on iOS base_url = "https://vector.im/beta/#/room" else: base_url = "https://matrix.to/#" return "%s/%s" % (base_url, room_id) def make_notif_link(self, notif): if self.hs.config.email_riot_base_url: return "%s/#/room/%s/%s" % ( self.hs.config.email_riot_base_url, notif['room_id'], notif['event_id'] ) elif self.app_name == "Vector": # need /beta for Universal Links to work on iOS return "https://vector.im/beta/#/room/%s/%s" % ( notif['room_id'], notif['event_id'] ) else: return "https://matrix.to/#/%s/%s" % ( notif['room_id'], notif['event_id'] ) def make_unsubscribe_link(self, user_id, app_id, email_address): params = { "access_token": self.macaroon_gen.generate_delete_pusher_token(user_id), "app_id": app_id, "pushkey": email_address, } # XXX: make r0 once API is stable return "%s_matrix/client/unstable/pushers/remove?%s" % ( self.hs.config.public_baseurl, 
urllib.urlencode(params), ) def safe_markup(raw_html): return jinja2.Markup(bleach.linkify(bleach.clean( raw_html, tags=ALLOWED_TAGS, attributes=ALLOWED_ATTRS, # bleach master has this, but it isn't released yet # protocols=ALLOWED_SCHEMES, strip=True ))) def safe_text(raw_text): """ Process text: treat it as HTML but escape any tags (ie. just escape the HTML) then linkify it. """ return jinja2.Markup(bleach.linkify(bleach.clean( raw_text, tags=[], attributes={}, strip=False ))) def deduped_ordered_list(l): seen = set() ret = [] for item in l: if item not in seen: seen.add(item) ret.append(item) return ret def string_ordinal_total(s): tot = 0 for c in s: tot += ord(c) return tot def format_ts_filter(value, format): return time.strftime(format, time.localtime(value / 1000)) def load_jinja2_templates(config): """Load the jinja2 email templates from disk Returns: (notif_template_html, notif_template_text) """ logger.info("loading jinja2") loader = jinja2.FileSystemLoader(config.email_template_dir) env = jinja2.Environment(loader=loader) env.filters["format_ts"] = format_ts_filter env.filters["mxc_to_http"] = _create_mxc_to_http_filter(config) notif_template_html = env.get_template( config.email_notif_template_html ) notif_template_text = env.get_template( config.email_notif_template_text ) return notif_template_html, notif_template_text def _create_mxc_to_http_filter(config): def mxc_to_http_filter(value, width, height, resize_method="crop"): if value[0:6] != "mxc://": return "" serverAndMediaId = value[6:] fragment = None if '#' in serverAndMediaId: (serverAndMediaId, fragment) = serverAndMediaId.split('#', 1) fragment = "#" + fragment params = { "width": width, "height": height, "method": resize_method, } return "%s_matrix/media/v1/thumbnail/%s?%s%s" % ( config.public_baseurl, serverAndMediaId, urllib.urlencode(params), fragment or "", ) return mxc_to_http_filter synapse-0.24.0/synapse/push/presentable_names.py000066400000000000000000000166501317335640100217740ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from twisted.internet import defer import re import logging logger = logging.getLogger(__name__) # intentionally looser than what aliases we allow to be registered since # other HSes may allow aliases that we would not ALIAS_RE = re.compile(r"^#.*:.+$") ALL_ALONE = "Empty Room" @defer.inlineCallbacks def calculate_room_name(store, room_state_ids, user_id, fallback_to_members=True, fallback_to_single_member=True): """ Works out a user-facing name for the given room as per Matrix spec recommendations. Does not yet support internationalisation. Args: room_state: Dictionary of the room's state user_id: The ID of the user to whom the room name is being presented fallback_to_members: If False, return None instead of generating a name based on the room's members if the room has no title or aliases. Returns: (string or None) A human readable name for the room. """ # does it have a name? 
if ("m.room.name", "") in room_state_ids: m_room_name = yield store.get_event( room_state_ids[("m.room.name", "")], allow_none=True ) if m_room_name and m_room_name.content and m_room_name.content["name"]: defer.returnValue(m_room_name.content["name"]) # does it have a canonical alias? if ("m.room.canonical_alias", "") in room_state_ids: canon_alias = yield store.get_event( room_state_ids[("m.room.canonical_alias", "")], allow_none=True ) if ( canon_alias and canon_alias.content and canon_alias.content["alias"] and _looks_like_an_alias(canon_alias.content["alias"]) ): defer.returnValue(canon_alias.content["alias"]) # at this point we're going to need to search the state by all state keys # for an event type, so rearrange the data structure room_state_bytype_ids = _state_as_two_level_dict(room_state_ids) # right then, any aliases at all? if "m.room.aliases" in room_state_bytype_ids: m_room_aliases = room_state_bytype_ids["m.room.aliases"] for alias_id in m_room_aliases.values(): alias_event = yield store.get_event( alias_id, allow_none=True ) if alias_event and alias_event.content.get("aliases"): the_aliases = alias_event.content["aliases"] if len(the_aliases) > 0 and _looks_like_an_alias(the_aliases[0]): defer.returnValue(the_aliases[0]) if not fallback_to_members: defer.returnValue(None) my_member_event = None if ("m.room.member", user_id) in room_state_ids: my_member_event = yield store.get_event( room_state_ids[("m.room.member", user_id)], allow_none=True ) if ( my_member_event is not None and my_member_event.content['membership'] == "invite" ): if ("m.room.member", my_member_event.sender) in room_state_ids: inviter_member_event = yield store.get_event( room_state_ids[("m.room.member", my_member_event.sender)], allow_none=True, ) if inviter_member_event: if fallback_to_single_member: defer.returnValue( "Invite from %s" % ( name_from_member_event(inviter_member_event), ) ) else: return else: defer.returnValue("Room Invite") # we're going to have to generate a name based on who's in the room, # so find out who is in the room that isn't the user. if "m.room.member" in room_state_bytype_ids: member_events = yield store.get_events( room_state_bytype_ids["m.room.member"].values() ) all_members = [ ev for ev in member_events.values() if ev.content['membership'] == "join" or ev.content['membership'] == "invite" ] # Sort the member events oldest-first so the we name people in the # order the joined (it should at least be deterministic rather than # dictionary iteration order) all_members.sort(key=lambda e: e.origin_server_ts) other_members = [m for m in all_members if m.state_key != user_id] else: other_members = [] all_members = [] if len(other_members) == 0: if len(all_members) == 1: # self-chat, peeked room with 1 participant, # or inbound invite, or outbound 3PID invite. if all_members[0].sender == user_id: if "m.room.third_party_invite" in room_state_bytype_ids: third_party_invites = ( room_state_bytype_ids["m.room.third_party_invite"].values() ) if len(third_party_invites) > 0: # technically third party invite events are not member # events, but they are close enough # FIXME: no they're not - they look nothing like a member; # they have a great big encrypted thing as their name to # prevent leaking the 3PID name... 
# return "Inviting %s" % ( # descriptor_from_member_events(third_party_invites) # ) defer.returnValue("Inviting email address") else: defer.returnValue(ALL_ALONE) else: defer.returnValue(name_from_member_event(all_members[0])) else: defer.returnValue(ALL_ALONE) elif len(other_members) == 1 and not fallback_to_single_member: return else: defer.returnValue(descriptor_from_member_events(other_members)) def descriptor_from_member_events(member_events): if len(member_events) == 0: return "nobody" elif len(member_events) == 1: return name_from_member_event(member_events[0]) elif len(member_events) == 2: return "%s and %s" % ( name_from_member_event(member_events[0]), name_from_member_event(member_events[1]), ) else: return "%s and %d others" % ( name_from_member_event(member_events[0]), len(member_events) - 1, ) def name_from_member_event(member_event): if ( member_event.content and "displayname" in member_event.content and member_event.content["displayname"] ): return member_event.content["displayname"] return member_event.state_key def _state_as_two_level_dict(state): ret = {} for k, v in state.items(): ret.setdefault(k[0], {})[k[1]] = v return ret def _looks_like_an_alias(string): return ALIAS_RE.match(string) is not None synapse-0.24.0/synapse/push/push_rule_evaluator.py000066400000000000000000000165731317335640100224010ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # Copyright 2017 New Vector Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
import logging import re from synapse.types import UserID from synapse.util.caches import CACHE_SIZE_FACTOR, register_cache from synapse.util.caches.lrucache import LruCache logger = logging.getLogger(__name__) GLOB_REGEX = re.compile(r'\\\[(\\\!|)(.*)\\\]') IS_GLOB = re.compile(r'[\?\*\[\]]') INEQUALITY_EXPR = re.compile("^([=<>]*)([0-9]*)$") def _room_member_count(ev, condition, room_member_count): return _test_ineq_condition(condition, room_member_count) def _sender_notification_permission(ev, condition, sender_power_level, power_levels): notif_level_key = condition.get('key') if notif_level_key is None: return False notif_levels = power_levels.get('notifications', {}) room_notif_level = notif_levels.get(notif_level_key, 50) return sender_power_level >= room_notif_level def _test_ineq_condition(condition, number): if 'is' not in condition: return False m = INEQUALITY_EXPR.match(condition['is']) if not m: return False ineq = m.group(1) rhs = m.group(2) if not rhs.isdigit(): return False rhs = int(rhs) if ineq == '' or ineq == '==': return number == rhs elif ineq == '<': return number < rhs elif ineq == '>': return number > rhs elif ineq == '>=': return number >= rhs elif ineq == '<=': return number <= rhs else: return False def tweaks_for_actions(actions): tweaks = {} for a in actions: if not isinstance(a, dict): continue if 'set_tweak' in a and 'value' in a: tweaks[a['set_tweak']] = a['value'] return tweaks class PushRuleEvaluatorForEvent(object): def __init__(self, event, room_member_count, sender_power_level, power_levels): self._event = event self._room_member_count = room_member_count self._sender_power_level = sender_power_level self._power_levels = power_levels # Maps strings of e.g. 'content.body' -> event["content"]["body"] self._value_cache = _flatten_dict(event) def matches(self, condition, user_id, display_name): if condition['kind'] == 'event_match': return self._event_match(condition, user_id) elif condition['kind'] == 'contains_display_name': return self._contains_display_name(display_name) elif condition['kind'] == 'room_member_count': return _room_member_count( self._event, condition, self._room_member_count ) elif condition['kind'] == 'sender_notification_permission': return _sender_notification_permission( self._event, condition, self._sender_power_level, self._power_levels, ) else: return True def _event_match(self, condition, user_id): pattern = condition.get('pattern', None) if not pattern: pattern_type = condition.get('pattern_type', None) if pattern_type == "user_id": pattern = user_id elif pattern_type == "user_localpart": pattern = UserID.from_string(user_id).localpart if not pattern: logger.warn("event_match condition with no pattern") return False # XXX: optimisation: cache our pattern regexps if condition['key'] == 'content.body': body = self._event["content"].get("body", None) if not body: return False return _glob_matches(pattern, body, word_boundary=True) else: haystack = self._get_value(condition['key']) if haystack is None: return False return _glob_matches(pattern, haystack) def _contains_display_name(self, display_name): if not display_name: return False body = self._event["content"].get("body", None) if not body: return False return _glob_matches(display_name, body, word_boundary=True) def _get_value(self, dotted_key): return self._value_cache.get(dotted_key, None) # Caches (glob, word_boundary) -> regex for push. 
See _glob_matches regex_cache = LruCache(50000 * CACHE_SIZE_FACTOR) register_cache("regex_push_cache", regex_cache) def _glob_matches(glob, value, word_boundary=False): """Tests if value matches glob. Args: glob (string) value (string): String to test against glob. word_boundary (bool): Whether to match against word boundaries or entire string. Defaults to False. Returns: bool """ try: r = regex_cache.get((glob, word_boundary), None) if not r: r = _glob_to_re(glob, word_boundary) regex_cache[(glob, word_boundary)] = r return r.search(value) except re.error: logger.warn("Failed to parse glob to regex: %r", glob) return False def _glob_to_re(glob, word_boundary): """Generates regex for a given glob. Args: glob (string) word_boundary (bool): Whether to match against word boundaries or entire string. Defaults to False. Returns: regex object """ if IS_GLOB.search(glob): r = re.escape(glob) r = r.replace(r'\*', '.*?') r = r.replace(r'\?', '.') # handle [abc], [a-z] and [!a-z] style ranges. r = GLOB_REGEX.sub( lambda x: ( '[%s%s]' % ( x.group(1) and '^' or '', x.group(2).replace(r'\\\-', '-') ) ), r, ) if word_boundary: r = _re_word_boundary(r) return re.compile(r, flags=re.IGNORECASE) else: r = "^" + r + "$" return re.compile(r, flags=re.IGNORECASE) elif word_boundary: r = re.escape(glob) r = _re_word_boundary(r) return re.compile(r, flags=re.IGNORECASE) else: r = "^" + re.escape(glob) + "$" return re.compile(r, flags=re.IGNORECASE) def _re_word_boundary(r): """ Adds word boundary characters to the start and end of an expression to require that the match occur as a whole word, but do so respecting the fact that strings starting or ending with non-word characters will change word boundaries. """ # we can't use \b as it chokes on unicode. however \W seems to be okay # as shorthand for [^0-9A-Za-z_]. return r"(^|\W)%s(\W|$)" % (r,) def _flatten_dict(d, prefix=[], result=None): if result is None: result = {} for key, value in d.items(): if isinstance(value, basestring): result[".".join(prefix + [key])] = value.lower() elif hasattr(value, "items"): _flatten_dict(value, prefix=(prefix + [key]), result=result) return result synapse-0.24.0/synapse/push/push_tools.py000066400000000000000000000044201317335640100204740ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
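# get_badge_count() computes one badge per invite plus one per joined room with
# unread notifications (a per-conversation count rather than per-message), and
# get_context_for_event() looks up the room name and sender display name used
# when formatting a push.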
from twisted.internet import defer from synapse.push.presentable_names import ( calculate_room_name, name_from_member_event ) @defer.inlineCallbacks def get_badge_count(store, user_id): invites = yield store.get_invited_rooms_for_user(user_id) joins = yield store.get_rooms_for_user(user_id) my_receipts_by_room = yield store.get_receipts_for_user( user_id, "m.read", ) badge = len(invites) for room_id in joins: if room_id in my_receipts_by_room: last_unread_event_id = my_receipts_by_room[room_id] notifs = yield ( store.get_unread_event_push_actions_by_room_for_user( room_id, user_id, last_unread_event_id ) ) # return one badge count per conversation, as count per # message is so noisy as to be almost useless badge += 1 if notifs["notify_count"] else 0 defer.returnValue(badge) @defer.inlineCallbacks def get_context_for_event(store, state_handler, ev, user_id): ctx = {} room_state_ids = yield store.get_state_ids_for_event(ev.event_id) # we no longer bother setting room_alias, and make room_name the # human-readable name instead, be that m.room.name, an alias or # a list of people in the room name = yield calculate_room_name( store, room_state_ids, user_id, fallback_to_single_member=False ) if name: ctx['name'] = name sender_state_event_id = room_state_ids[("m.room.member", ev.sender)] sender_state_event = yield store.get_event(sender_state_event_id) ctx['sender_display_name'] = name_from_member_event(sender_state_event) defer.returnValue(ctx) synapse-0.24.0/synapse/push/pusher.py000066400000000000000000000053441317335640100176110ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from httppusher import HttpPusher import logging logger = logging.getLogger(__name__) # We try importing this if we can (it will fail if we don't # have the optional email dependencies installed). 
We don't # yet have the config to know if we need the email pusher, # but importing this after daemonizing seems to fail # (even though a simple test of importing from a daemonized # process works fine) try: from synapse.push.emailpusher import EmailPusher from synapse.push.mailer import Mailer, load_jinja2_templates except: pass class PusherFactory(object): def __init__(self, hs): self.hs = hs self.pusher_types = { "http": HttpPusher, } logger.info("email enable notifs: %r", hs.config.email_enable_notifs) if hs.config.email_enable_notifs: self.mailers = {} # app_name -> Mailer templates = load_jinja2_templates(hs.config) self.notif_template_html, self.notif_template_text = templates self.pusher_types["email"] = self._create_email_pusher logger.info("defined email pusher type") def create_pusher(self, pusherdict): logger.info("trying to create_pusher for %r", pusherdict) if pusherdict['kind'] in self.pusher_types: logger.info("found pusher") return self.pusher_types[pusherdict['kind']](self.hs, pusherdict) def _create_email_pusher(self, _hs, pusherdict): app_name = self._app_name_from_pusherdict(pusherdict) mailer = self.mailers.get(app_name) if not mailer: mailer = Mailer( hs=self.hs, app_name=app_name, notif_template_html=self.notif_template_html, notif_template_text=self.notif_template_text, ) self.mailers[app_name] = mailer return EmailPusher(self.hs, pusherdict, mailer) def _app_name_from_pusherdict(self, pusherdict): if 'data' in pusherdict and 'brand' in pusherdict['data']: app_name = pusherdict['data']['brand'] else: app_name = self.hs.config.email_app_name return app_name synapse-0.24.0/synapse/push/pusherpool.py000066400000000000000000000175621317335640100205100ustar00rootroot00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from twisted.internet import defer from .pusher import PusherFactory from synapse.util.logcontext import preserve_fn, preserve_context_over_deferred from synapse.util.async import run_on_reactor import logging logger = logging.getLogger(__name__) class PusherPool: def __init__(self, _hs): self.hs = _hs self.pusher_factory = PusherFactory(_hs) self.start_pushers = _hs.config.start_pushers self.store = self.hs.get_datastore() self.clock = self.hs.get_clock() self.pushers = {} @defer.inlineCallbacks def start(self): pushers = yield self.store.get_all_pushers() self._start_pushers(pushers) @defer.inlineCallbacks def add_pusher(self, user_id, access_token, kind, app_id, app_display_name, device_display_name, pushkey, lang, data, profile_tag=""): time_now_msec = self.clock.time_msec() # we try to create the pusher just to validate the config: it # will then get pulled out of the database, # recreated, added and started: this means we have only one # code path adding pushers. 
self.pusher_factory.create_pusher({ "id": None, "user_name": user_id, "kind": kind, "app_id": app_id, "app_display_name": app_display_name, "device_display_name": device_display_name, "pushkey": pushkey, "ts": time_now_msec, "lang": lang, "data": data, "last_stream_ordering": None, "last_success": None, "failing_since": None }) # create the pusher setting last_stream_ordering to the current maximum # stream ordering in event_push_actions, so it will process # pushes from this point onwards. last_stream_ordering = ( yield self.store.get_latest_push_action_stream_ordering() ) yield self.store.add_pusher( user_id=user_id, access_token=access_token, kind=kind, app_id=app_id, app_display_name=app_display_name, device_display_name=device_display_name, pushkey=pushkey, pushkey_ts=time_now_msec, lang=lang, data=data, last_stream_ordering=last_stream_ordering, profile_tag=profile_tag, ) yield self._refresh_pusher(app_id, pushkey, user_id) @defer.inlineCallbacks def remove_pushers_by_app_id_and_pushkey_not_user(self, app_id, pushkey, not_user_id): to_remove = yield self.store.get_pushers_by_app_id_and_pushkey( app_id, pushkey ) for p in to_remove: if p['user_name'] != not_user_id: logger.info( "Removing pusher for app id %s, pushkey %s, user %s", app_id, pushkey, p['user_name'] ) yield self.remove_pusher(p['app_id'], p['pushkey'], p['user_name']) @defer.inlineCallbacks def remove_pushers_by_user(self, user_id, except_access_token_id=None): all = yield self.store.get_all_pushers() logger.info( "Removing all pushers for user %s except access tokens id %r", user_id, except_access_token_id ) for p in all: if p['user_name'] == user_id and p['access_token'] != except_access_token_id: logger.info( "Removing pusher for app id %s, pushkey %s, user %s", p['app_id'], p['pushkey'], p['user_name'] ) yield self.remove_pusher(p['app_id'], p['pushkey'], p['user_name']) @defer.inlineCallbacks def on_new_notifications(self, min_stream_id, max_stream_id): yield run_on_reactor() try: users_affected = yield self.store.get_push_action_users_in_range( min_stream_id, max_stream_id ) deferreds = [] for u in users_affected: if u in self.pushers: for p in self.pushers[u].values(): deferreds.append( preserve_fn(p.on_new_notifications)( min_stream_id, max_stream_id ) ) yield preserve_context_over_deferred(defer.gatherResults(deferreds)) except: logger.exception("Exception in pusher on_new_notifications") @defer.inlineCallbacks def on_new_receipts(self, min_stream_id, max_stream_id, affected_room_ids): yield run_on_reactor() try: # Need to subtract 1 from the minimum because the lower bound here # is not inclusive updated_receipts = yield self.store.get_all_updated_receipts( min_stream_id - 1, max_stream_id ) # This returns a tuple, user_id is at index 3 users_affected = set([r[3] for r in updated_receipts]) deferreds = [] for u in users_affected: if u in self.pushers: for p in self.pushers[u].values(): deferreds.append( preserve_fn(p.on_new_receipts)(min_stream_id, max_stream_id) ) yield preserve_context_over_deferred(defer.gatherResults(deferreds)) except: logger.exception("Exception in pusher on_new_receipts") @defer.inlineCallbacks def _refresh_pusher(self, app_id, pushkey, user_id): resultlist = yield self.store.get_pushers_by_app_id_and_pushkey( app_id, pushkey ) p = None for r in resultlist: if r['user_name'] == user_id: p = r if p: self._start_pushers([p]) def _start_pushers(self, pushers): if not self.start_pushers: logger.info("Not starting pushers because they are disabled in the config") return logger.info("Starting 
%d pushers", len(pushers)) for pusherdict in pushers: try: p = self.pusher_factory.create_pusher(pusherdict) except: logger.exception("Couldn't start a pusher: caught Exception") continue if p: appid_pushkey = "%s:%s" % ( pusherdict['app_id'], pusherdict['pushkey'], ) byuser = self.pushers.setdefault(pusherdict['user_name'], {}) if appid_pushkey in byuser: byuser[appid_pushkey].on_stop() byuser[appid_pushkey] = p preserve_fn(p.on_started)() logger.info("Started pushers") @defer.inlineCallbacks def remove_pusher(self, app_id, pushkey, user_id): appid_pushkey = "%s:%s" % (app_id, pushkey) byuser = self.pushers.get(user_id, {}) if appid_pushkey in byuser: logger.info("Stopping pusher %s / %s", user_id, appid_pushkey) byuser[appid_pushkey].on_stop() del byuser[appid_pushkey] yield self.store.delete_pusher_by_app_id_pushkey_user_id( app_id, pushkey, user_id ) synapse-0.24.0/synapse/push/rulekinds.py000066400000000000000000000014061317335640100202760ustar00rootroot00000000000000# Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. PRIORITY_CLASS_MAP = { 'underride': 1, 'sender': 2, 'room': 3, 'content': 4, 'override': 5, } PRIORITY_CLASS_INVERSE_MAP = {v: k for k, v in PRIORITY_CLASS_MAP.items()} synapse-0.24.0/synapse/python_dependencies.py000066400000000000000000000137221317335640100213520ustar00rootroot00000000000000# Copyright 2015, 2016 OpenMarket Ltd # Copyright 2017 Vector Creations Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
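# The REQUIREMENTS mapping below maps a pip-style requirement string to a list
# of importable module names (optionally with their own version constraints).
# check_requirements() imports each listed module and compares its __version__
# against the constraint to verify that the dependency is actually installed.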
import logging from distutils.version import LooseVersion logger = logging.getLogger(__name__) REQUIREMENTS = { "jsonschema>=2.5.1": ["jsonschema>=2.5.1"], "frozendict>=0.4": ["frozendict"], "unpaddedbase64>=1.1.0": ["unpaddedbase64>=1.1.0"], "canonicaljson>=1.0.0": ["canonicaljson>=1.0.0"], "signedjson>=1.0.0": ["signedjson>=1.0.0"], "pynacl==0.3.0": ["nacl==0.3.0", "nacl.bindings"], "service_identity>=1.0.0": ["service_identity>=1.0.0"], "Twisted>=16.0.0": ["twisted>=16.0.0"], "pyopenssl>=0.14": ["OpenSSL>=0.14"], "pyyaml": ["yaml"], "pyasn1": ["pyasn1"], "daemonize": ["daemonize"], "bcrypt": ["bcrypt"], "pillow": ["PIL"], "pydenticon": ["pydenticon"], "ujson": ["ujson"], "blist": ["blist"], "pysaml2>=3.0.0,<4.0.0": ["saml2>=3.0.0,<4.0.0"], "pymacaroons-pynacl": ["pymacaroons"], "msgpack-python>=0.3.0": ["msgpack"], "phonenumbers>=8.2.0": ["phonenumbers"], } CONDITIONAL_REQUIREMENTS = { "web_client": { "matrix_angular_sdk>=0.6.8": ["syweb>=0.6.8"], }, "preview_url": { "netaddr>=0.7.18": ["netaddr"], }, "email.enable_notifs": { "Jinja2>=2.8": ["Jinja2>=2.8"], "bleach>=1.4.2": ["bleach>=1.4.2"], }, "matrix-synapse-ldap3": { "matrix-synapse-ldap3>=0.1": ["ldap_auth_provider"], }, "psutil": { "psutil>=2.0.0": ["psutil>=2.0.0"], }, "affinity": { "affinity": ["affinity"], }, } def requirements(config=None, include_conditional=False): reqs = REQUIREMENTS.copy() if include_conditional: for _, req in CONDITIONAL_REQUIREMENTS.items(): reqs.update(req) return reqs def github_link(project, version, egg): return "https://github.com/%s/tarball/%s/#egg=%s" % (project, version, egg) DEPENDENCY_LINKS = { } class MissingRequirementError(Exception): def __init__(self, message, module_name, dependency): super(MissingRequirementError, self).__init__(message) self.module_name = module_name self.dependency = dependency def check_requirements(config=None): """Checks that all the modules needed by synapse have been correctly installed and are at the correct version""" for dependency, module_requirements in ( requirements(config, include_conditional=False).items()): for module_requirement in module_requirements: if ">=" in module_requirement: module_name, required_version = module_requirement.split(">=") version_test = ">=" elif "==" in module_requirement: module_name, required_version = module_requirement.split("==") version_test = "==" else: module_name = module_requirement version_test = None try: module = __import__(module_name) except ImportError: logging.exception( "Can't import %r which is part of %r", module_name, dependency ) raise MissingRequirementError( "Can't import %r which is part of %r" % (module_name, dependency), module_name, dependency ) version = getattr(module, "__version__", None) file_path = getattr(module, "__file__", None) logger.info( "Using %r version %r from %r to satisfy %r", module_name, version, file_path, dependency ) if version_test == ">=": if version is None: raise MissingRequirementError( "Version of %r isn't set as __version__ of module %r" % (dependency, module_name), module_name, dependency ) if LooseVersion(version) < LooseVersion(required_version): raise MissingRequirementError( "Version of %r in %r is too old. 
%r < %r" % (dependency, file_path, version, required_version), module_name, dependency ) elif version_test == "==": if version is None: raise MissingRequirementError( "Version of %r isn't set as __version__ of module %r" % (dependency, module_name), module_name, dependency ) if LooseVersion(version) != LooseVersion(required_version): raise MissingRequirementError( "Unexpected version of %r in %r. %r != %r" % (dependency, file_path, version, required_version), module_name, dependency ) def list_requirements(): result = [] linked = [] for link in DEPENDENCY_LINKS.values(): egg = link.split("#egg=")[1] linked.append(egg.split('-')[0]) result.append(link) for requirement in requirements(include_conditional=True): is_linked = False for link in linked: if requirement.replace('-', '_').startswith(link): is_linked = True if not is_linked: result.append(requirement) return result if __name__ == "__main__": import sys sys.stdout.writelines(req + "\n" for req in list_requirements()) synapse-0.24.0/synapse/replication/000077500000000000000000000000001317335640100172555ustar00rootroot00000000000000synapse-0.24.0/synapse/replication/__init__.py000066400000000000000000000011321317335640100213630ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. synapse-0.24.0/synapse/replication/slave/000077500000000000000000000000001317335640100203675ustar00rootroot00000000000000synapse-0.24.0/synapse/replication/slave/__init__.py000066400000000000000000000011321317335640100224750ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. synapse-0.24.0/synapse/replication/slave/storage/000077500000000000000000000000001317335640100220335ustar00rootroot00000000000000synapse-0.24.0/synapse/replication/slave/storage/__init__.py000066400000000000000000000011321317335640100241410ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
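# The modules in this package each expose a read-only, cache-aware view of a
# slice of the main DataStore for use on worker processes.  A worker's
# datastore is typically assembled by mixing several of them together; a
# minimal sketch (not a file in this tree), assuming the slaved store classes
# defined in the modules below:
#
#     class ExampleWorkerStore(
#         SlavedEventStore,
#         SlavedReceiptsStore,
#         SlavedAccountDataStore,
#         BaseSlavedStore,
#     ):
#         pass
#
# Each mixin borrows the read methods it needs from DataStore and keeps its
# caches consistent by implementing stream_positions() and
# process_replication_rows().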
synapse-0.24.0/synapse/replication/slave/storage/_base.py000066400000000000000000000041421317335640100234570ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from synapse.storage._base import SQLBaseStore from synapse.storage.engines import PostgresEngine from ._slaved_id_tracker import SlavedIdTracker import logging logger = logging.getLogger(__name__) class BaseSlavedStore(SQLBaseStore): def __init__(self, db_conn, hs): super(BaseSlavedStore, self).__init__(hs) if isinstance(self.database_engine, PostgresEngine): self._cache_id_gen = SlavedIdTracker( db_conn, "cache_invalidation_stream", "stream_id", ) else: self._cache_id_gen = None self.hs = hs def stream_positions(self): pos = {} if self._cache_id_gen: pos["caches"] = self._cache_id_gen.get_current_token() return pos def process_replication_rows(self, stream_name, token, rows): if stream_name == "caches": self._cache_id_gen.advance(token) for row in rows: try: getattr(self, row.cache_func).invalidate(tuple(row.keys)) except AttributeError: # We probably haven't pulled in the cache in this worker, # which is fine. pass def _invalidate_cache_and_stream(self, txn, cache_func, keys): txn.call_after(cache_func.invalidate, keys) txn.call_after(self._send_invalidation_poke, cache_func, keys) def _send_invalidation_poke(self, cache_func, keys): self.hs.get_tcp_replication().send_invalidate_cache(cache_func, keys) synapse-0.24.0/synapse/replication/slave/storage/_slaved_id_tracker.py000066400000000000000000000022731317335640100262150ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from synapse.storage.util.id_generators import _load_current_id class SlavedIdTracker(object): def __init__(self, db_conn, table, column, extra_tables=[], step=1): self.step = step self._current = _load_current_id(db_conn, table, column, step) for table, column in extra_tables: self.advance(_load_current_id(db_conn, table, column)) def advance(self, new_id): self._current = (max if self.step > 0 else min)(self._current, new_id) def get_current_token(self): """ Returns: int """ return self._current synapse-0.24.0/synapse/replication/slave/storage/account_data.py000066400000000000000000000070111317335640100250310ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from ._base import BaseSlavedStore from ._slaved_id_tracker import SlavedIdTracker from synapse.storage import DataStore from synapse.storage.account_data import AccountDataStore from synapse.storage.tags import TagsStore from synapse.util.caches.stream_change_cache import StreamChangeCache class SlavedAccountDataStore(BaseSlavedStore): def __init__(self, db_conn, hs): super(SlavedAccountDataStore, self).__init__(db_conn, hs) self._account_data_id_gen = SlavedIdTracker( db_conn, "account_data_max_stream_id", "stream_id", ) self._account_data_stream_cache = StreamChangeCache( "AccountDataAndTagsChangeCache", self._account_data_id_gen.get_current_token(), ) get_account_data_for_user = ( AccountDataStore.__dict__["get_account_data_for_user"] ) get_global_account_data_by_type_for_users = ( AccountDataStore.__dict__["get_global_account_data_by_type_for_users"] ) get_global_account_data_by_type_for_user = ( AccountDataStore.__dict__["get_global_account_data_by_type_for_user"] ) get_tags_for_user = TagsStore.__dict__["get_tags_for_user"] get_tags_for_room = ( DataStore.get_tags_for_room.__func__ ) get_account_data_for_room = ( DataStore.get_account_data_for_room.__func__ ) get_updated_tags = DataStore.get_updated_tags.__func__ get_updated_account_data_for_user = ( DataStore.get_updated_account_data_for_user.__func__ ) def get_max_account_data_stream_id(self): return self._account_data_id_gen.get_current_token() def stream_positions(self): result = super(SlavedAccountDataStore, self).stream_positions() position = self._account_data_id_gen.get_current_token() result["user_account_data"] = position result["room_account_data"] = position result["tag_account_data"] = position return result def process_replication_rows(self, stream_name, token, rows): if stream_name == "tag_account_data": self._account_data_id_gen.advance(token) for row in rows: self.get_tags_for_user.invalidate((row.user_id,)) self._account_data_stream_cache.entity_has_changed( row.user_id, token ) elif stream_name == "account_data": self._account_data_id_gen.advance(token) for row in rows: if not row.room_id: self.get_global_account_data_by_type_for_user.invalidate( (row.data_type, row.user_id,) ) self.get_account_data_for_user.invalidate((row.user_id,)) self._account_data_stream_cache.entity_has_changed( row.user_id, token ) return super(SlavedAccountDataStore, self).process_replication_rows( stream_name, token, rows ) synapse-0.24.0/synapse/replication/slave/storage/appservice.py000066400000000000000000000041431317335640100245500ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. from ._base import BaseSlavedStore from synapse.storage import DataStore from synapse.config.appservice import load_appservices from synapse.storage.appservice import _make_exclusive_regex class SlavedApplicationServiceStore(BaseSlavedStore): def __init__(self, db_conn, hs): super(SlavedApplicationServiceStore, self).__init__(db_conn, hs) self.services_cache = load_appservices( hs.config.server_name, hs.config.app_service_config_files ) self.exclusive_user_regex = _make_exclusive_regex(self.services_cache) get_app_service_by_token = DataStore.get_app_service_by_token.__func__ get_app_service_by_user_id = DataStore.get_app_service_by_user_id.__func__ get_app_services = DataStore.get_app_services.__func__ get_new_events_for_appservice = DataStore.get_new_events_for_appservice.__func__ create_appservice_txn = DataStore.create_appservice_txn.__func__ get_appservices_by_state = DataStore.get_appservices_by_state.__func__ get_oldest_unsent_txn = DataStore.get_oldest_unsent_txn.__func__ _get_last_txn = DataStore._get_last_txn.__func__ complete_appservice_txn = DataStore.complete_appservice_txn.__func__ get_appservice_state = DataStore.get_appservice_state.__func__ set_appservice_last_pos = DataStore.set_appservice_last_pos.__func__ set_appservice_state = DataStore.set_appservice_state.__func__ get_if_app_services_interested_in_user = ( DataStore.get_if_app_services_interested_in_user.__func__ ) synapse-0.24.0/synapse/replication/slave/storage/client_ips.py000066400000000000000000000032071317335640100245400ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2017 Vector Creations Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from ._base import BaseSlavedStore from synapse.storage.client_ips import LAST_SEEN_GRANULARITY from synapse.util.caches import CACHE_SIZE_FACTOR from synapse.util.caches.descriptors import Cache class SlavedClientIpStore(BaseSlavedStore): def __init__(self, db_conn, hs): super(SlavedClientIpStore, self).__init__(db_conn, hs) self.client_ip_last_seen = Cache( name="client_ip_last_seen", keylen=4, max_entries=50000 * CACHE_SIZE_FACTOR, ) def insert_client_ip(self, user_id, access_token, ip, user_agent, device_id): now = int(self._clock.time_msec()) key = (user_id, access_token, ip) try: last_seen = self.client_ip_last_seen.get(key) except KeyError: last_seen = None # Rate-limited inserts if last_seen is not None and (now - last_seen) < LAST_SEEN_GRANULARITY: return self.hs.get_tcp_replication().send_user_ip( user_id, access_token, ip, user_agent, device_id, now ) synapse-0.24.0/synapse/replication/slave/storage/deviceinbox.py000066400000000000000000000056561317335640100247200ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from ._base import BaseSlavedStore from ._slaved_id_tracker import SlavedIdTracker from synapse.storage import DataStore from synapse.util.caches.stream_change_cache import StreamChangeCache from synapse.util.caches.expiringcache import ExpiringCache class SlavedDeviceInboxStore(BaseSlavedStore): def __init__(self, db_conn, hs): super(SlavedDeviceInboxStore, self).__init__(db_conn, hs) self._device_inbox_id_gen = SlavedIdTracker( db_conn, "device_max_stream_id", "stream_id", ) self._device_inbox_stream_cache = StreamChangeCache( "DeviceInboxStreamChangeCache", self._device_inbox_id_gen.get_current_token() ) self._device_federation_outbox_stream_cache = StreamChangeCache( "DeviceFederationOutboxStreamChangeCache", self._device_inbox_id_gen.get_current_token() ) self._last_device_delete_cache = ExpiringCache( cache_name="last_device_delete_cache", clock=self._clock, max_len=10000, expiry_ms=30 * 60 * 1000, ) get_to_device_stream_token = DataStore.get_to_device_stream_token.__func__ get_new_messages_for_device = DataStore.get_new_messages_for_device.__func__ get_new_device_msgs_for_remote = DataStore.get_new_device_msgs_for_remote.__func__ delete_messages_for_device = DataStore.delete_messages_for_device.__func__ delete_device_msgs_for_remote = DataStore.delete_device_msgs_for_remote.__func__ def stream_positions(self): result = super(SlavedDeviceInboxStore, self).stream_positions() result["to_device"] = self._device_inbox_id_gen.get_current_token() return result def process_replication_rows(self, stream_name, token, rows): if stream_name == "to_device": self._device_inbox_id_gen.advance(token) for row in rows: if row.entity.startswith("@"): self._device_inbox_stream_cache.entity_has_changed( row.entity, token ) else: self._device_federation_outbox_stream_cache.entity_has_changed( row.entity, token ) return super(SlavedDeviceInboxStore, self).process_replication_rows( stream_name, token, rows ) synapse-0.24.0/synapse/replication/slave/storage/devices.py000066400000000000000000000056431317335640100240370ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
from ._base import BaseSlavedStore from ._slaved_id_tracker import SlavedIdTracker from synapse.storage import DataStore from synapse.storage.end_to_end_keys import EndToEndKeyStore from synapse.util.caches.stream_change_cache import StreamChangeCache class SlavedDeviceStore(BaseSlavedStore): def __init__(self, db_conn, hs): super(SlavedDeviceStore, self).__init__(db_conn, hs) self.hs = hs self._device_list_id_gen = SlavedIdTracker( db_conn, "device_lists_stream", "stream_id", ) device_list_max = self._device_list_id_gen.get_current_token() self._device_list_stream_cache = StreamChangeCache( "DeviceListStreamChangeCache", device_list_max, ) self._device_list_federation_stream_cache = StreamChangeCache( "DeviceListFederationStreamChangeCache", device_list_max, ) get_device_stream_token = DataStore.get_device_stream_token.__func__ get_user_whose_devices_changed = DataStore.get_user_whose_devices_changed.__func__ get_devices_by_remote = DataStore.get_devices_by_remote.__func__ _get_devices_by_remote_txn = DataStore._get_devices_by_remote_txn.__func__ _get_e2e_device_keys_txn = DataStore._get_e2e_device_keys_txn.__func__ mark_as_sent_devices_by_remote = DataStore.mark_as_sent_devices_by_remote.__func__ _mark_as_sent_devices_by_remote_txn = ( DataStore._mark_as_sent_devices_by_remote_txn.__func__ ) count_e2e_one_time_keys = EndToEndKeyStore.__dict__["count_e2e_one_time_keys"] def stream_positions(self): result = super(SlavedDeviceStore, self).stream_positions() result["device_lists"] = self._device_list_id_gen.get_current_token() return result def process_replication_rows(self, stream_name, token, rows): if stream_name == "device_lists": self._device_list_id_gen.advance(token) for row in rows: self._device_list_stream_cache.entity_has_changed( row.user_id, token ) if row.destination: self._device_list_federation_stream_cache.entity_has_changed( row.destination, token ) return super(SlavedDeviceStore, self).process_replication_rows( stream_name, token, rows ) synapse-0.24.0/synapse/replication/slave/storage/directory.py000066400000000000000000000014731317335640100244160ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from ._base import BaseSlavedStore from synapse.storage.directory import DirectoryStore class DirectoryStore(BaseSlavedStore): get_aliases_for_room = DirectoryStore.__dict__[ "get_aliases_for_room" ] synapse-0.24.0/synapse/replication/slave/storage/events.py000066400000000000000000000244141317335640100237160ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from ._base import BaseSlavedStore from ._slaved_id_tracker import SlavedIdTracker from synapse.api.constants import EventTypes from synapse.storage import DataStore from synapse.storage.roommember import RoomMemberStore from synapse.storage.event_federation import EventFederationStore from synapse.storage.event_push_actions import EventPushActionsStore from synapse.storage.state import StateStore from synapse.storage.stream import StreamStore from synapse.util.caches.stream_change_cache import StreamChangeCache import logging logger = logging.getLogger(__name__) # So, um, we want to borrow a load of functions intended for reading from # a DataStore, but we don't want to take functions that either write to the # DataStore or are cached and don't have cache invalidation logic. # # Rather than write duplicate versions of those functions, or lift them to # a common base class, we're going to grab the underlying __func__ object from # the method descriptor on the DataStore and chuck them into our class. class SlavedEventStore(BaseSlavedStore): def __init__(self, db_conn, hs): super(SlavedEventStore, self).__init__(db_conn, hs) self._stream_id_gen = SlavedIdTracker( db_conn, "events", "stream_ordering", ) self._backfill_id_gen = SlavedIdTracker( db_conn, "events", "stream_ordering", step=-1 ) events_max = self._stream_id_gen.get_current_token() event_cache_prefill, min_event_val = self._get_cache_dict( db_conn, "events", entity_column="room_id", stream_column="stream_ordering", max_value=events_max, ) self._events_stream_cache = StreamChangeCache( "EventsRoomStreamChangeCache", min_event_val, prefilled_cache=event_cache_prefill, ) self._membership_stream_cache = StreamChangeCache( "MembershipStreamChangeCache", events_max, ) self.stream_ordering_month_ago = 0 self._stream_order_on_start = self.get_room_max_stream_ordering() # Cached functions can't be accessed through a class instance so we need # to reach inside the __dict__ to extract them.
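    # For example, in the assignments below:
    #     get_rooms_for_user = RoomMemberStore.__dict__["get_rooms_for_user"]
    #     get_event = DataStore.get_event.__func__
    # the first form pulls a cached descriptor straight out of the class
    # __dict__ (it can't be reached via normal attribute access, per the note
    # above), while the second takes the plain underlying function from an
    # ordinary method so that it can be re-bound to this class.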
get_rooms_for_user = RoomMemberStore.__dict__["get_rooms_for_user"] get_users_in_room = RoomMemberStore.__dict__["get_users_in_room"] get_hosts_in_room = RoomMemberStore.__dict__["get_hosts_in_room"] get_users_who_share_room_with_user = ( RoomMemberStore.__dict__["get_users_who_share_room_with_user"] ) get_latest_event_ids_in_room = EventFederationStore.__dict__[ "get_latest_event_ids_in_room" ] get_invited_rooms_for_user = RoomMemberStore.__dict__[ "get_invited_rooms_for_user" ] get_unread_event_push_actions_by_room_for_user = ( EventPushActionsStore.__dict__["get_unread_event_push_actions_by_room_for_user"] ) _get_unread_counts_by_receipt_txn = ( DataStore._get_unread_counts_by_receipt_txn.__func__ ) _get_unread_counts_by_pos_txn = ( DataStore._get_unread_counts_by_pos_txn.__func__ ) _get_state_group_for_events = ( StateStore.__dict__["_get_state_group_for_events"] ) _get_state_group_for_event = ( StateStore.__dict__["_get_state_group_for_event"] ) _get_state_groups_from_groups = ( StateStore.__dict__["_get_state_groups_from_groups"] ) _get_state_groups_from_groups_txn = ( DataStore._get_state_groups_from_groups_txn.__func__ ) get_recent_event_ids_for_room = ( StreamStore.__dict__["get_recent_event_ids_for_room"] ) get_current_state_ids = ( StateStore.__dict__["get_current_state_ids"] ) get_state_group_delta = StateStore.__dict__["get_state_group_delta"] _get_joined_hosts_cache = RoomMemberStore.__dict__["_get_joined_hosts_cache"] has_room_changed_since = DataStore.has_room_changed_since.__func__ get_unread_push_actions_for_user_in_range_for_http = ( DataStore.get_unread_push_actions_for_user_in_range_for_http.__func__ ) get_unread_push_actions_for_user_in_range_for_email = ( DataStore.get_unread_push_actions_for_user_in_range_for_email.__func__ ) get_push_action_users_in_range = ( DataStore.get_push_action_users_in_range.__func__ ) get_event = DataStore.get_event.__func__ get_events = DataStore.get_events.__func__ get_rooms_for_user_where_membership_is = ( DataStore.get_rooms_for_user_where_membership_is.__func__ ) get_membership_changes_for_user = ( DataStore.get_membership_changes_for_user.__func__ ) get_room_events_max_id = DataStore.get_room_events_max_id.__func__ get_room_events_stream_for_room = ( DataStore.get_room_events_stream_for_room.__func__ ) get_events_around = DataStore.get_events_around.__func__ get_state_for_event = DataStore.get_state_for_event.__func__ get_state_for_events = DataStore.get_state_for_events.__func__ get_state_groups = DataStore.get_state_groups.__func__ get_state_groups_ids = DataStore.get_state_groups_ids.__func__ get_state_ids_for_event = DataStore.get_state_ids_for_event.__func__ get_state_ids_for_events = DataStore.get_state_ids_for_events.__func__ get_joined_users_from_state = DataStore.get_joined_users_from_state.__func__ get_joined_users_from_context = DataStore.get_joined_users_from_context.__func__ _get_joined_users_from_context = ( RoomMemberStore.__dict__["_get_joined_users_from_context"] ) get_joined_hosts = DataStore.get_joined_hosts.__func__ _get_joined_hosts = RoomMemberStore.__dict__["_get_joined_hosts"] get_recent_events_for_room = DataStore.get_recent_events_for_room.__func__ get_room_events_stream_for_rooms = ( DataStore.get_room_events_stream_for_rooms.__func__ ) is_host_joined = RoomMemberStore.__dict__["is_host_joined"] get_stream_token_for_event = DataStore.get_stream_token_for_event.__func__ _set_before_and_after = staticmethod(DataStore._set_before_and_after) _get_events = DataStore._get_events.__func__ _get_events_from_cache = 
DataStore._get_events_from_cache.__func__ _invalidate_get_event_cache = DataStore._invalidate_get_event_cache.__func__ _enqueue_events = DataStore._enqueue_events.__func__ _do_fetch = DataStore._do_fetch.__func__ _fetch_event_rows = DataStore._fetch_event_rows.__func__ _get_event_from_row = DataStore._get_event_from_row.__func__ _get_rooms_for_user_where_membership_is_txn = ( DataStore._get_rooms_for_user_where_membership_is_txn.__func__ ) _get_state_for_groups = DataStore._get_state_for_groups.__func__ _get_all_state_from_cache = DataStore._get_all_state_from_cache.__func__ _get_events_around_txn = DataStore._get_events_around_txn.__func__ _get_some_state_from_cache = DataStore._get_some_state_from_cache.__func__ get_backfill_events = DataStore.get_backfill_events.__func__ _get_backfill_events = DataStore._get_backfill_events.__func__ get_missing_events = DataStore.get_missing_events.__func__ _get_missing_events = DataStore._get_missing_events.__func__ get_auth_chain = DataStore.get_auth_chain.__func__ get_auth_chain_ids = DataStore.get_auth_chain_ids.__func__ _get_auth_chain_ids_txn = DataStore._get_auth_chain_ids_txn.__func__ get_room_max_stream_ordering = DataStore.get_room_max_stream_ordering.__func__ get_forward_extremeties_for_room = ( DataStore.get_forward_extremeties_for_room.__func__ ) _get_forward_extremeties_for_room = ( EventFederationStore.__dict__["_get_forward_extremeties_for_room"] ) get_all_new_events_stream = DataStore.get_all_new_events_stream.__func__ get_federation_out_pos = DataStore.get_federation_out_pos.__func__ update_federation_out_pos = DataStore.update_federation_out_pos.__func__ def stream_positions(self): result = super(SlavedEventStore, self).stream_positions() result["events"] = self._stream_id_gen.get_current_token() result["backfill"] = -self._backfill_id_gen.get_current_token() return result def process_replication_rows(self, stream_name, token, rows): if stream_name == "events": self._stream_id_gen.advance(token) for row in rows: self.invalidate_caches_for_event( token, row.event_id, row.room_id, row.type, row.state_key, row.redacts, backfilled=False, ) elif stream_name == "backfill": self._backfill_id_gen.advance(-token) for row in rows: self.invalidate_caches_for_event( -token, row.event_id, row.room_id, row.type, row.state_key, row.redacts, backfilled=True, ) return super(SlavedEventStore, self).process_replication_rows( stream_name, token, rows ) def invalidate_caches_for_event(self, stream_ordering, event_id, room_id, etype, state_key, redacts, backfilled): self._invalidate_get_event_cache(event_id) self.get_latest_event_ids_in_room.invalidate((room_id,)) self.get_unread_event_push_actions_by_room_for_user.invalidate_many( (room_id,) ) if not backfilled: self._events_stream_cache.entity_has_changed( room_id, stream_ordering ) if redacts: self._invalidate_get_event_cache(redacts) if etype == EventTypes.Member: self._membership_stream_cache.entity_has_changed( state_key, stream_ordering ) self.get_invited_rooms_for_user.invalidate((state_key,)) synapse-0.24.0/synapse/replication/slave/storage/filtering.py000066400000000000000000000017241317335640100243740ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from ._base import BaseSlavedStore from synapse.storage.filtering import FilteringStore class SlavedFilteringStore(BaseSlavedStore): def __init__(self, db_conn, hs): super(SlavedFilteringStore, self).__init__(db_conn, hs) # Filters are immutable so this cache doesn't need to be expired get_user_filter = FilteringStore.__dict__["get_user_filter"] synapse-0.24.0/synapse/replication/slave/storage/groups.py000066400000000000000000000041051317335640100237240ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from ._base import BaseSlavedStore from ._slaved_id_tracker import SlavedIdTracker from synapse.storage import DataStore from synapse.util.caches.stream_change_cache import StreamChangeCache class SlavedGroupServerStore(BaseSlavedStore): def __init__(self, db_conn, hs): super(SlavedGroupServerStore, self).__init__(db_conn, hs) self.hs = hs self._group_updates_id_gen = SlavedIdTracker( db_conn, "local_group_updates", "stream_id", ) self._group_updates_stream_cache = StreamChangeCache( "_group_updates_stream_cache", self._group_updates_id_gen.get_current_token(), ) get_groups_changes_for_user = DataStore.get_groups_changes_for_user.__func__ get_group_stream_token = DataStore.get_group_stream_token.__func__ get_all_groups_for_user = DataStore.get_all_groups_for_user.__func__ def stream_positions(self): result = super(SlavedGroupServerStore, self).stream_positions() result["groups"] = self._group_updates_id_gen.get_current_token() return result def process_replication_rows(self, stream_name, token, rows): if stream_name == "groups": self._group_updates_id_gen.advance(token) for row in rows: self._group_updates_stream_cache.entity_has_changed( row.user_id, token ) return super(SlavedGroupServerStore, self).process_replication_rows( stream_name, token, rows ) synapse-0.24.0/synapse/replication/slave/storage/keys.py000066400000000000000000000024031317335640100233570ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
from ._base import BaseSlavedStore from synapse.storage import DataStore from synapse.storage.keys import KeyStore class SlavedKeyStore(BaseSlavedStore): _get_server_verify_key = KeyStore.__dict__[ "_get_server_verify_key" ] get_server_verify_keys = DataStore.get_server_verify_keys.__func__ store_server_verify_key = DataStore.store_server_verify_key.__func__ get_server_certificate = DataStore.get_server_certificate.__func__ store_server_certificate = DataStore.store_server_certificate.__func__ get_server_keys_json = DataStore.get_server_keys_json.__func__ store_server_keys_json = DataStore.store_server_keys_json.__func__ synapse-0.24.0/synapse/replication/slave/storage/presence.py000066400000000000000000000055101317335640100242120ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from ._base import BaseSlavedStore from ._slaved_id_tracker import SlavedIdTracker from synapse.util.caches.stream_change_cache import StreamChangeCache from synapse.storage import DataStore from synapse.storage.presence import PresenceStore class SlavedPresenceStore(BaseSlavedStore): def __init__(self, db_conn, hs): super(SlavedPresenceStore, self).__init__(db_conn, hs) self._presence_id_gen = SlavedIdTracker( db_conn, "presence_stream", "stream_id", ) self._presence_on_startup = self._get_active_presence(db_conn) self.presence_stream_cache = self.presence_stream_cache = StreamChangeCache( "PresenceStreamChangeCache", self._presence_id_gen.get_current_token() ) _get_active_presence = DataStore._get_active_presence.__func__ take_presence_startup_info = DataStore.take_presence_startup_info.__func__ _get_presence_for_user = PresenceStore.__dict__["_get_presence_for_user"] get_presence_for_users = PresenceStore.__dict__["get_presence_for_users"] # XXX: This is a bit broken because we don't persist the accepted list in a # way that can be replicated. This means that we don't have a way to # invalidate the cache correctly. 
get_presence_list_accepted = PresenceStore.__dict__[ "get_presence_list_accepted" ] get_presence_list_observers_accepted = PresenceStore.__dict__[ "get_presence_list_observers_accepted" ] def get_current_presence_token(self): return self._presence_id_gen.get_current_token() def stream_positions(self): result = super(SlavedPresenceStore, self).stream_positions() position = self._presence_id_gen.get_current_token() result["presence"] = position return result def process_replication_rows(self, stream_name, token, rows): if stream_name == "presence": self._presence_id_gen.advance(token) for row in rows: self.presence_stream_cache.entity_has_changed( row.user_id, token ) self._get_presence_for_user.invalidate((row.user_id,)) return super(SlavedPresenceStore, self).process_replication_rows( stream_name, token, rows ) synapse-0.24.0/synapse/replication/slave/storage/push_rule.py000066400000000000000000000050251317335640100244150ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from .events import SlavedEventStore from ._slaved_id_tracker import SlavedIdTracker from synapse.storage import DataStore from synapse.storage.push_rule import PushRuleStore from synapse.util.caches.stream_change_cache import StreamChangeCache class SlavedPushRuleStore(SlavedEventStore): def __init__(self, db_conn, hs): super(SlavedPushRuleStore, self).__init__(db_conn, hs) self._push_rules_stream_id_gen = SlavedIdTracker( db_conn, "push_rules_stream", "stream_id", ) self.push_rules_stream_cache = StreamChangeCache( "PushRulesStreamChangeCache", self._push_rules_stream_id_gen.get_current_token(), ) get_push_rules_for_user = PushRuleStore.__dict__["get_push_rules_for_user"] get_push_rules_enabled_for_user = ( PushRuleStore.__dict__["get_push_rules_enabled_for_user"] ) have_push_rules_changed_for_user = ( DataStore.have_push_rules_changed_for_user.__func__ ) def get_push_rules_stream_token(self): return ( self._push_rules_stream_id_gen.get_current_token(), self._stream_id_gen.get_current_token(), ) def stream_positions(self): result = super(SlavedPushRuleStore, self).stream_positions() result["push_rules"] = self._push_rules_stream_id_gen.get_current_token() return result def process_replication_rows(self, stream_name, token, rows): if stream_name == "push_rules": self._push_rules_stream_id_gen.advance(token) for row in rows: self.get_push_rules_for_user.invalidate((row.user_id,)) self.get_push_rules_enabled_for_user.invalidate((row.user_id,)) self.push_rules_stream_cache.entity_has_changed( row.user_id, token ) return super(SlavedPushRuleStore, self).process_replication_rows( stream_name, token, rows ) synapse-0.24.0/synapse/replication/slave/storage/pushers.py000066400000000000000000000033601317335640100241000ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from ._base import BaseSlavedStore from ._slaved_id_tracker import SlavedIdTracker from synapse.storage import DataStore class SlavedPusherStore(BaseSlavedStore): def __init__(self, db_conn, hs): super(SlavedPusherStore, self).__init__(db_conn, hs) self._pushers_id_gen = SlavedIdTracker( db_conn, "pushers", "id", extra_tables=[("deleted_pushers", "stream_id")], ) get_all_pushers = DataStore.get_all_pushers.__func__ get_pushers_by = DataStore.get_pushers_by.__func__ get_pushers_by_app_id_and_pushkey = ( DataStore.get_pushers_by_app_id_and_pushkey.__func__ ) _decode_pushers_rows = DataStore._decode_pushers_rows.__func__ def stream_positions(self): result = super(SlavedPusherStore, self).stream_positions() result["pushers"] = self._pushers_id_gen.get_current_token() return result def process_replication_rows(self, stream_name, token, rows): if stream_name == "pushers": self._pushers_id_gen.advance(token) return super(SlavedPusherStore, self).process_replication_rows( stream_name, token, rows ) synapse-0.24.0/synapse/replication/slave/storage/receipts.py000066400000000000000000000065651317335640100242370ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from ._base import BaseSlavedStore from ._slaved_id_tracker import SlavedIdTracker from synapse.storage import DataStore from synapse.storage.receipts import ReceiptsStore from synapse.util.caches.stream_change_cache import StreamChangeCache # So, um, we want to borrow a load of functions intended for reading from # a DataStore, but we don't want to take functions that either write to the # DataStore or are cached and don't have cache invalidation logic. # # Rather than write duplicate versions of those functions, or lift them to # a common base class, we going to grab the underlying __func__ object from # the method descriptor on the DataStore and chuck them into our class. 
class SlavedReceiptsStore(BaseSlavedStore): def __init__(self, db_conn, hs): super(SlavedReceiptsStore, self).__init__(db_conn, hs) self._receipts_id_gen = SlavedIdTracker( db_conn, "receipts_linearized", "stream_id" ) self._receipts_stream_cache = StreamChangeCache( "ReceiptsRoomChangeCache", self._receipts_id_gen.get_current_token() ) get_receipts_for_user = ReceiptsStore.__dict__["get_receipts_for_user"] get_linearized_receipts_for_room = ( ReceiptsStore.__dict__["get_linearized_receipts_for_room"] ) _get_linearized_receipts_for_rooms = ( ReceiptsStore.__dict__["_get_linearized_receipts_for_rooms"] ) get_last_receipt_event_id_for_user = ( ReceiptsStore.__dict__["get_last_receipt_event_id_for_user"] ) get_max_receipt_stream_id = DataStore.get_max_receipt_stream_id.__func__ get_all_updated_receipts = DataStore.get_all_updated_receipts.__func__ get_linearized_receipts_for_rooms = ( DataStore.get_linearized_receipts_for_rooms.__func__ ) def stream_positions(self): result = super(SlavedReceiptsStore, self).stream_positions() result["receipts"] = self._receipts_id_gen.get_current_token() return result def invalidate_caches_for_receipt(self, room_id, receipt_type, user_id): self.get_receipts_for_user.invalidate((user_id, receipt_type)) self.get_linearized_receipts_for_room.invalidate_many((room_id,)) self.get_last_receipt_event_id_for_user.invalidate( (user_id, room_id, receipt_type) ) def process_replication_rows(self, stream_name, token, rows): if stream_name == "receipts": self._receipts_id_gen.advance(token) for row in rows: self.invalidate_caches_for_receipt( row.room_id, row.receipt_type, row.user_id ) self._receipts_stream_cache.entity_has_changed(row.room_id, token) return super(SlavedReceiptsStore, self).process_replication_rows( stream_name, token, rows ) synapse-0.24.0/synapse/replication/slave/storage/registration.py000066400000000000000000000022571317335640100251250ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from ._base import BaseSlavedStore from synapse.storage import DataStore from synapse.storage.registration import RegistrationStore class SlavedRegistrationStore(BaseSlavedStore): def __init__(self, db_conn, hs): super(SlavedRegistrationStore, self).__init__(db_conn, hs) # TODO: use the cached version and invalidate deleted tokens get_user_by_access_token = RegistrationStore.__dict__[ "get_user_by_access_token" ] _query_for_auth = DataStore._query_for_auth.__func__ get_user_by_id = RegistrationStore.__dict__[ "get_user_by_id" ] synapse-0.24.0/synapse/replication/slave/storage/room.py000066400000000000000000000040161317335640100233620ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from ._base import BaseSlavedStore from synapse.storage import DataStore from synapse.storage.room import RoomStore from ._slaved_id_tracker import SlavedIdTracker class RoomStore(BaseSlavedStore): def __init__(self, db_conn, hs): super(RoomStore, self).__init__(db_conn, hs) self._public_room_id_gen = SlavedIdTracker( db_conn, "public_room_list_stream", "stream_id" ) get_public_room_ids = DataStore.get_public_room_ids.__func__ get_current_public_room_stream_id = ( DataStore.get_current_public_room_stream_id.__func__ ) get_public_room_ids_at_stream_id = ( RoomStore.__dict__["get_public_room_ids_at_stream_id"] ) get_public_room_ids_at_stream_id_txn = ( DataStore.get_public_room_ids_at_stream_id_txn.__func__ ) get_published_at_stream_id_txn = ( DataStore.get_published_at_stream_id_txn.__func__ ) get_public_room_changes = DataStore.get_public_room_changes.__func__ def stream_positions(self): result = super(RoomStore, self).stream_positions() result["public_rooms"] = self._public_room_id_gen.get_current_token() return result def process_replication_rows(self, stream_name, token, rows): if stream_name == "public_rooms": self._public_room_id_gen.advance(token) return super(RoomStore, self).process_replication_rows( stream_name, token, rows ) synapse-0.24.0/synapse/replication/slave/storage/transactions.py000066400000000000000000000023721317335640100251210ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from ._base import BaseSlavedStore from synapse.storage import DataStore from synapse.storage.transactions import TransactionStore class TransactionStore(BaseSlavedStore): get_destination_retry_timings = TransactionStore.__dict__[ "get_destination_retry_timings" ] _get_destination_retry_timings = DataStore._get_destination_retry_timings.__func__ set_destination_retry_timings = DataStore.set_destination_retry_timings.__func__ _set_destination_retry_timings = DataStore._set_destination_retry_timings.__func__ prep_send_transaction = DataStore.prep_send_transaction.__func__ delivered_txn = DataStore.delivered_txn.__func__ synapse-0.24.0/synapse/replication/tcp/000077500000000000000000000000001317335640100200435ustar00rootroot00000000000000synapse-0.24.0/synapse/replication/tcp/__init__.py000066400000000000000000000023421317335640100221550ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2017 Vector Creations Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """This module implements the TCP replication protocol used by synapse to communicate between the master process and its workers (when they're enabled). Further details can be found in docs/tcp_replication.rst Structure of the module: * client.py - the client classes used for workers to connect to master * command.py - the definitions of all the valid commands * protocol.py - contains bot the client and server protocol implementations, these should not be used directly * resource.py - the server classes that accepts and handle client connections * streams.py - the definitons of all the valid streams """ synapse-0.24.0/synapse/replication/tcp/client.py000066400000000000000000000163631317335640100217040ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2017 Vector Creations Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """A replication client for use by synapse workers. """ from twisted.internet import reactor, defer from twisted.internet.protocol import ReconnectingClientFactory from .commands import ( FederationAckCommand, UserSyncCommand, RemovePusherCommand, InvalidateCacheCommand, UserIpCommand, ) from .protocol import ClientReplicationStreamProtocol import logging logger = logging.getLogger(__name__) class ReplicationClientFactory(ReconnectingClientFactory): """Factory for building connections to the master. Will reconnect if the connection is lost. Accepts a handler that will be called when new data is available or data is required. """ maxDelay = 5 # Try at least once every N seconds def __init__(self, hs, client_name, handler): self.client_name = client_name self.handler = handler self.server_name = hs.config.server_name self._clock = hs.get_clock() # As self.clock is defined in super class reactor.addSystemEventTrigger("before", "shutdown", self.stopTrying) def startedConnecting(self, connector): logger.info("Connecting to replication: %r", connector.getDestination()) def buildProtocol(self, addr): logger.info("Connected to replication: %r", addr) self.resetDelay() return ClientReplicationStreamProtocol( self.client_name, self.server_name, self._clock, self.handler ) def clientConnectionLost(self, connector, reason): logger.error("Lost replication conn: %r", reason) ReconnectingClientFactory.clientConnectionLost(self, connector, reason) def clientConnectionFailed(self, connector, reason): logger.error("Failed to connect to replication: %r", reason) ReconnectingClientFactory.clientConnectionFailed( self, connector, reason ) class ReplicationClientHandler(object): """A base handler that can be passed to the ReplicationClientFactory. 
By default proxies incoming replication data to the SlaveStore. """ def __init__(self, store): self.store = store # The current connection. None if we are currently (re)connecting self.connection = None # Any pending commands to be sent once a new connection has been # established self.pending_commands = [] # Map from string -> deferred, to wake up when receiveing a SYNC with # the given string. # Used for tests. self.awaiting_syncs = {} def start_replication(self, hs): """Helper method to start a replication connection to the remote server using TCP. """ client_name = hs.config.worker_name factory = ReplicationClientFactory(hs, client_name, self) host = hs.config.worker_replication_host port = hs.config.worker_replication_port reactor.connectTCP(host, port, factory) def on_rdata(self, stream_name, token, rows): """Called when we get new replication data. By default this just pokes the slave store. Can be overriden in subclasses to handle more. """ logger.info("Received rdata %s -> %s", stream_name, token) self.store.process_replication_rows(stream_name, token, rows) def on_position(self, stream_name, token): """Called when we get new position data. By default this just pokes the slave store. Can be overriden in subclasses to handle more. """ self.store.process_replication_rows(stream_name, token, []) def on_sync(self, data): """When we received a SYNC we wake up any deferreds that were waiting for the sync with the given data. Used by tests. """ d = self.awaiting_syncs.pop(data, None) if d: d.callback(data) def get_streams_to_replicate(self): """Called when a new connection has been established and we need to subscribe to streams. Returns a dictionary of stream name to token. """ args = self.store.stream_positions() user_account_data = args.pop("user_account_data", None) room_account_data = args.pop("room_account_data", None) if user_account_data: args["account_data"] = user_account_data elif room_account_data: args["account_data"] = room_account_data return args def get_currently_syncing_users(self): """Get the list of currently syncing users (if any). This is called when a connection has been established and we need to send the currently syncing users. (Overriden by the synchrotron's only) """ return [] def send_command(self, cmd): """Send a command to master (when we get establish a connection if we don't have one already.) """ if self.connection: self.connection.send_command(cmd) else: logger.warn("Queuing command as not connected: %r", cmd.NAME) self.pending_commands.append(cmd) def send_federation_ack(self, token): """Ack data for the federation stream. This allows the master to drop data stored purely in memory. """ self.send_command(FederationAckCommand(token)) def send_user_sync(self, user_id, is_syncing, last_sync_ms): """Poke the master that a user has started/stopped syncing. """ self.send_command(UserSyncCommand(user_id, is_syncing, last_sync_ms)) def send_remove_pusher(self, app_id, push_key, user_id): """Poke the master to remove a pusher for a user """ cmd = RemovePusherCommand(app_id, push_key, user_id) self.send_command(cmd) def send_invalidate_cache(self, cache_func, keys): """Poke the master to invalidate a cache. """ cmd = InvalidateCacheCommand(cache_func.__name__, keys) self.send_command(cmd) def send_user_ip(self, user_id, access_token, ip, user_agent, device_id, last_seen): """Tell the master that the user made a request. 
""" cmd = UserIpCommand(user_id, access_token, ip, user_agent, device_id, last_seen) self.send_command(cmd) def await_sync(self, data): """Returns a deferred that is resolved when we receive a SYNC command with given data. Used by tests. """ return self.awaiting_syncs.setdefault(data, defer.Deferred()) def update_connection(self, connection): """Called when a connection has been established (or lost with None). """ self.connection = connection if connection: for cmd in self.pending_commands: connection.send_command(cmd) self.pending_commands = [] synapse-0.24.0/synapse/replication/tcp/commands.py000066400000000000000000000236611317335640100222260ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2017 Vector Creations Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Defines the various valid commands The VALID_SERVER_COMMANDS and VALID_CLIENT_COMMANDS define which commands are allowed to be sent by which side. """ import logging import ujson as json logger = logging.getLogger(__name__) class Command(object): """The base command class. All subclasses must set the NAME variable which equates to the name of the command on the wire. A full command line on the wire is constructed from `NAME + " " + to_line()` The default implementation creates a command of form ` ` """ NAME = None def __init__(self, data): self.data = data @classmethod def from_line(cls, line): """Deserialises a line from the wire into this command. `line` does not include the command. """ return cls(line) def to_line(self): """Serialises the comamnd for the wire. Does not include the command prefix. """ return self.data class ServerCommand(Command): """Sent by the server on new connection and includes the server_name. Format:: SERVER """ NAME = "SERVER" class RdataCommand(Command): """Sent by server when a subscribed stream has an update. Format:: RDATA The `` may either be a numeric stream id OR "batch". The latter case is used to support sending multiple updates with the same stream ID. This is done by sending an RDATA for each row, with all but the last RDATA having a token of "batch" and the last having the final stream ID. The client should batch all incoming RDATA with a token of "batch" (per stream_name) until it sees an RDATA with a numeric stream ID. `` of "batch" maps to the instance variable `token` being None. An example of a batched series of RDATA:: RDATA presence batch ["@foo:example.com", "online", ...] RDATA presence batch ["@bar:example.com", "online", ...] RDATA presence 59 ["@baz:example.com", "online", ...] 
""" NAME = "RDATA" def __init__(self, stream_name, token, row): self.stream_name = stream_name self.token = token self.row = row @classmethod def from_line(cls, line): stream_name, token, row_json = line.split(" ", 2) return cls( stream_name, None if token == "batch" else int(token), json.loads(row_json) ) def to_line(self): return " ".join(( self.stream_name, str(self.token) if self.token is not None else "batch", json.dumps(self.row), )) class PositionCommand(Command): """Sent by the client to tell the client the stream postition without needing to send an RDATA. """ NAME = "POSITION" def __init__(self, stream_name, token): self.stream_name = stream_name self.token = token @classmethod def from_line(cls, line): stream_name, token = line.split(" ", 1) return cls(stream_name, int(token)) def to_line(self): return " ".join((self.stream_name, str(self.token),)) class ErrorCommand(Command): """Sent by either side if there was an ERROR. The data is a string describing the error. """ NAME = "ERROR" class PingCommand(Command): """Sent by either side as a keep alive. The data is arbitary (often timestamp) """ NAME = "PING" class NameCommand(Command): """Sent by client to inform the server of the client's identity. The data is the name """ NAME = "NAME" class ReplicateCommand(Command): """Sent by the client to subscribe to the stream. Format:: REPLICATE Where may be either: * a numeric stream_id to stream updates from * "NOW" to stream all subsequent updates. The can be "ALL" to subscribe to all known streams, in which case the must be set to "NOW", i.e.:: REPLICATE ALL NOW """ NAME = "REPLICATE" def __init__(self, stream_name, token): self.stream_name = stream_name self.token = token @classmethod def from_line(cls, line): stream_name, token = line.split(" ", 1) if token in ("NOW", "now"): token = "NOW" else: token = int(token) return cls(stream_name, token) def to_line(self): return " ".join((self.stream_name, str(self.token),)) class UserSyncCommand(Command): """Sent by the client to inform the server that a user has started or stopped syncing. Used to calculate presence on the master. Includes a timestamp of when the last user sync was. Format:: USER_SYNC Where is either "start" or "stop" """ NAME = "USER_SYNC" def __init__(self, user_id, is_syncing, last_sync_ms): self.user_id = user_id self.is_syncing = is_syncing self.last_sync_ms = last_sync_ms @classmethod def from_line(cls, line): user_id, state, last_sync_ms = line.split(" ", 2) if state not in ("start", "end"): raise Exception("Invalid USER_SYNC state %r" % (state,)) return cls(user_id, state == "start", int(last_sync_ms)) def to_line(self): return " ".join(( self.user_id, "start" if self.is_syncing else "end", str(self.last_sync_ms), )) class FederationAckCommand(Command): """Sent by the client when it has processed up to a given point in the federation stream. This allows the master to drop in-memory caches of the federation stream. This must only be sent from one worker (i.e. the one sending federation) Format:: FEDERATION_ACK """ NAME = "FEDERATION_ACK" def __init__(self, token): self.token = token @classmethod def from_line(cls, line): return cls(int(line)) def to_line(self): return str(self.token) class SyncCommand(Command): """Used for testing. The client protocol implementation allows waiting on a SYNC command with a specified data. """ NAME = "SYNC" class RemovePusherCommand(Command): """Sent by the client to request the master remove the given pusher. 
Format:: REMOVE_PUSHER <app_id> <push_key> <user_id> """ NAME = "REMOVE_PUSHER" def __init__(self, app_id, push_key, user_id): self.user_id = user_id self.app_id = app_id self.push_key = push_key @classmethod def from_line(cls, line): app_id, push_key, user_id = line.split(" ", 2) return cls(app_id, push_key, user_id) def to_line(self): return " ".join((self.app_id, self.push_key, self.user_id)) class InvalidateCacheCommand(Command): """Sent by the client to invalidate an upstream cache. THIS IS NOT RELIABLE, AND SHOULD *NOT* BE USED EXCEPT FOR THINGS THAT ARE NOT DISASTROUS IF WE DROP ON THE FLOOR. Mainly used to invalidate destination retry timing caches. Format:: INVALIDATE_CACHE <cache_func_name> <keys_json> Where <keys_json> is a json list. """ NAME = "INVALIDATE_CACHE" def __init__(self, cache_func, keys): self.cache_func = cache_func self.keys = keys @classmethod def from_line(cls, line): cache_func, keys_json = line.split(" ", 1) return cls(cache_func, json.loads(keys_json)) def to_line(self): return " ".join((self.cache_func, json.dumps(self.keys))) class UserIpCommand(Command): """Sent periodically when a worker sees activity from a client. Format:: USER_IP <user_id>, <access_token>, <ip>, <user_agent>, <device_id>, <last_seen> """ NAME = "USER_IP" def __init__(self, user_id, access_token, ip, user_agent, device_id, last_seen): self.user_id = user_id self.access_token = access_token self.ip = ip self.user_agent = user_agent self.device_id = device_id self.last_seen = last_seen @classmethod def from_line(cls, line): user_id, jsn = line.split(" ", 1) access_token, ip, user_agent, device_id, last_seen = json.loads(jsn) return cls( user_id, access_token, ip, user_agent, device_id, last_seen ) def to_line(self): return self.user_id + " " + json.dumps(( self.access_token, self.ip, self.user_agent, self.device_id, self.last_seen, )) # Map of command name to command type. COMMAND_MAP = { cmd.NAME: cmd for cmd in ( ServerCommand, RdataCommand, PositionCommand, ErrorCommand, PingCommand, NameCommand, ReplicateCommand, UserSyncCommand, FederationAckCommand, SyncCommand, RemovePusherCommand, InvalidateCacheCommand, UserIpCommand, ) } # The commands the server is allowed to send VALID_SERVER_COMMANDS = ( ServerCommand.NAME, RdataCommand.NAME, PositionCommand.NAME, ErrorCommand.NAME, PingCommand.NAME, SyncCommand.NAME, ) # The commands the client is allowed to send VALID_CLIENT_COMMANDS = ( NameCommand.NAME, ReplicateCommand.NAME, PingCommand.NAME, UserSyncCommand.NAME, FederationAckCommand.NAME, RemovePusherCommand.NAME, InvalidateCacheCommand.NAME, UserIpCommand.NAME, ErrorCommand.NAME, ) synapse-0.24.0/synapse/replication/tcp/protocol.py000066400000000000000000000531451317335640100222660ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2017 Vector Creations Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """This module contains the implementation of both the client and server protocols. The basic structure of the protocol is line based, where the initial word of each line specifies the command. The rest of the line is parsed based on the command.
For example, the `RDATA` command is defined as:: RDATA <stream_name> <token> <row> (Note that `<row>` may contain spaces, but cannot contain newlines.) Blank lines are ignored. # Example An example interaction is shown below. Each line is prefixed with '>' or '<' to indicate which side is sending, these are *not* included on the wire:: * connection established * > SERVER localhost:8823 > PING 1490197665618 < NAME synapse.app.appservice < PING 1490197665618 < REPLICATE events 1 < REPLICATE backfill 1 < REPLICATE caches 1 > POSITION events 1 > POSITION backfill 1 > POSITION caches 1 > RDATA caches 2 ["get_user_by_id",["@01register-user:localhost:8823"],1490197670513] > RDATA events 14 ["$149019767112vOHxz:localhost:8823", "!AFDCvgApUmpdfVjIXm:localhost:8823","m.room.guest_access","",null] < PING 1490197675618 > ERROR server stopping * connection closed by server * """ from twisted.internet import defer from twisted.protocols.basic import LineOnlyReceiver from twisted.python.failure import Failure from commands import ( COMMAND_MAP, VALID_CLIENT_COMMANDS, VALID_SERVER_COMMANDS, ErrorCommand, ServerCommand, RdataCommand, PositionCommand, PingCommand, NameCommand, ReplicateCommand, UserSyncCommand, SyncCommand, ) from streams import STREAMS_MAP from synapse.util.stringutils import random_string from synapse.metrics.metric import CounterMetric import logging import synapse.metrics import struct import fcntl metrics = synapse.metrics.get_metrics_for(__name__) connection_close_counter = metrics.register_counter( "close_reason", labels=["reason_type"], ) # A list of all connected protocols. This allows us to send metrics about the # connections. connected_connections = [] logger = logging.getLogger(__name__) PING_TIME = 5000 PING_TIMEOUT_MULTIPLIER = 5 PING_TIMEOUT_MS = PING_TIME * PING_TIMEOUT_MULTIPLIER class ConnectionStates(object): CONNECTING = "connecting" ESTABLISHED = "established" PAUSED = "paused" CLOSED = "closed" class BaseReplicationStreamProtocol(LineOnlyReceiver): """Base replication protocol shared between client and server. Reads lines (ignoring blank ones) and parses them into command classes, asserting that they are valid for the given direction, i.e. server commands are only sent by the server. On receiving a new command it calls `on_<COMMAND_NAME>` with the parsed command. It also sends `PING` periodically, and correctly times out remote connections (if they send a `PING` command) """ delimiter = b'\n' VALID_INBOUND_COMMANDS = [] # Valid commands we expect to receive VALID_OUTBOUND_COMMANDS = [] # Valid commands we can send max_line_buffer = 10000 def __init__(self, clock): self.clock = clock self.last_received_command = self.clock.time_msec() self.last_sent_command = 0 self.time_we_closed = None # When we requested the connection be closed self.received_ping = False # Have we received a ping from the other side self.state = ConnectionStates.CONNECTING self.name = "anon" # The name sent by a client. self.conn_id = random_string(5) # To dedupe in case of name clashes. # List of pending commands to send once we've established the connection self.pending_commands = [] # The LoopingCall for sending pings.
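# As a worked example, assuming the module-level defaults above are unchanged:
# connectionMade() starts a looping call that runs send_ping() every 5000 ms,
# and a peer that has pinged us at least once but has then sent nothing for
# PING_TIME * PING_TIMEOUT_MULTIPLIER = 25000 ms is treated as dead, at which
# point send_ping() closes the connection with a "ping timeout" error.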
self._send_ping_loop = None self.inbound_commands_counter = CounterMetric( "inbound_commands", labels=["command"], ) self.outbound_commands_counter = CounterMetric( "outbound_commands", labels=["command"], ) def connectionMade(self): logger.info("[%s] Connection established", self.id()) self.state = ConnectionStates.ESTABLISHED connected_connections.append(self) # Register connection for metrics self.transport.registerProducer(self, True) # For the *Producing callbacks self._send_pending_commands() # Starts sending pings self._send_ping_loop = self.clock.looping_call(self.send_ping, 5000) # Always send the initial PING so that the other side knows that they # can time us out. self.send_command(PingCommand(self.clock.time_msec())) def send_ping(self): """Periodically sends a ping and checks if we should close the connection due to the other side timing out. """ now = self.clock.time_msec() if self.time_we_closed: if now - self.time_we_closed > PING_TIMEOUT_MS: logger.info( "[%s] Failed to close connection gracefully, aborting", self.id() ) self.transport.abortConnection() else: if now - self.last_sent_command >= PING_TIME: self.send_command(PingCommand(now)) if self.received_ping and now - self.last_received_command > PING_TIMEOUT_MS: logger.info( "[%s] Connection hasn't received command in %r ms. Closing.", self.id(), now - self.last_received_command ) self.send_error("ping timeout") def lineReceived(self, line): """Called when we've received a line """ if line.strip() == "": # Ignore blank lines return line = line.decode("utf-8") cmd_name, rest_of_line = line.split(" ", 1) if cmd_name not in self.VALID_INBOUND_COMMANDS: logger.error("[%s] invalid command %s", self.id(), cmd_name) self.send_error("invalid command: %s", cmd_name) return self.last_received_command = self.clock.time_msec() self.inbound_commands_counter.inc(cmd_name) cmd_cls = COMMAND_MAP[cmd_name] try: cmd = cmd_cls.from_line(rest_of_line) except Exception as e: logger.exception( "[%s] failed to parse line %r: %r", self.id(), cmd_name, rest_of_line ) self.send_error( "failed to parse line for %r: %r (%r):" % (cmd_name, e, rest_of_line) ) return # Now lets try and call on_ function try: getattr(self, "on_%s" % (cmd_name,))(cmd) except Exception: logger.exception("[%s] Failed to handle line: %r", self.id(), line) def close(self): logger.warn("[%s] Closing connection", self.id()) self.time_we_closed = self.clock.time_msec() self.transport.loseConnection() self.on_connection_closed() def send_error(self, error_string, *args): """Send an error to remote and close the connection. """ self.send_command(ErrorCommand(error_string % args)) self.close() def send_command(self, cmd, do_buffer=True): """Send a command if connection has been established. Args: cmd (Command) do_buffer (bool): Whether to buffer the message or always attempt to send the command. This is mostly used to send an error message if we're about to close the connection due our buffers becoming full. """ if self.state == ConnectionStates.CLOSED: logger.debug("[%s] Not sending, connection closed", self.id()) return if do_buffer and self.state != ConnectionStates.ESTABLISHED: self._queue_command(cmd) return self.outbound_commands_counter.inc(cmd.NAME) string = "%s %s" % (cmd.NAME, cmd.to_line(),) if "\n" in string: raise Exception("Unexpected newline in command: %r", string) self.sendLine(string.encode("utf-8")) self.last_sent_command = self.clock.time_msec() def _queue_command(self, cmd): """Queue the command until the connection is ready to write to again. 
""" logger.debug("[%s] Queing as conn %r, cmd: %r", self.id(), self.state, cmd) self.pending_commands.append(cmd) if len(self.pending_commands) > self.max_line_buffer: # The other side is failing to keep up and out buffers are becoming # full, so lets close the connection. # XXX: should we squawk more loudly? logger.error("[%s] Remote failed to keep up", self.id()) self.send_command(ErrorCommand("Failed to keep up"), do_buffer=False) self.close() def _send_pending_commands(self): """Send any queued commandes """ pending = self.pending_commands self.pending_commands = [] for cmd in pending: self.send_command(cmd) def on_PING(self, line): self.received_ping = True def on_ERROR(self, cmd): logger.error("[%s] Remote reported error: %r", self.id(), cmd.data) def pauseProducing(self): """This is called when both the kernel send buffer and the twisted tcp connection send buffers have become full. We don't actually have any control over those sizes, so we buffer some commands ourselves before knifing the connection due to the remote failing to keep up. """ logger.info("[%s] Pause producing", self.id()) self.state = ConnectionStates.PAUSED def resumeProducing(self): """The remote has caught up after we started buffering! """ logger.info("[%s] Resume producing", self.id()) self.state = ConnectionStates.ESTABLISHED self._send_pending_commands() def stopProducing(self): """We're never going to send any more data (normally because either we or the remote has closed the connection) """ logger.info("[%s] Stop producing", self.id()) self.on_connection_closed() def connectionLost(self, reason): logger.info("[%s] Replication connection closed: %r", self.id(), reason) if isinstance(reason, Failure): connection_close_counter.inc(reason.type.__name__) else: connection_close_counter.inc(reason.__class__.__name__) try: # Remove us from list of connections to be monitored connected_connections.remove(self) except ValueError: pass # Stop the looping call sending pings. if self._send_ping_loop and self._send_ping_loop.running: self._send_ping_loop.stop() self.on_connection_closed() def on_connection_closed(self): logger.info("[%s] Connection was closed", self.id()) self.state = ConnectionStates.CLOSED self.pending_commands = [] if self.transport: self.transport.unregisterProducer() def __str__(self): return "ReplicationConnection" % ( self.name, self.conn_id, self.addr, ) def id(self): return "%s-%s" % (self.name, self.conn_id) class ServerReplicationStreamProtocol(BaseReplicationStreamProtocol): VALID_INBOUND_COMMANDS = VALID_CLIENT_COMMANDS VALID_OUTBOUND_COMMANDS = VALID_SERVER_COMMANDS def __init__(self, server_name, clock, streamer, addr): BaseReplicationStreamProtocol.__init__(self, clock) # Old style class self.server_name = server_name self.streamer = streamer self.addr = addr # The streams the client has subscribed to and is up to date with self.replication_streams = set() # The streams the client is currently subscribing to. self.connecting_streams = set() # Map from stream name to list of updates to send once we've finished # subscribing the client to the stream. 
self.pending_rdata = {} def connectionMade(self): self.send_command(ServerCommand(self.server_name)) BaseReplicationStreamProtocol.connectionMade(self) self.streamer.new_connection(self) def on_NAME(self, cmd): logger.info("[%s] Renamed to %r", self.id(), cmd.data) self.name = cmd.data def on_USER_SYNC(self, cmd): self.streamer.on_user_sync( self.conn_id, cmd.user_id, cmd.is_syncing, cmd.last_sync_ms, ) def on_REPLICATE(self, cmd): stream_name = cmd.stream_name token = cmd.token if stream_name == "ALL": # Subscribe to all streams we're publishing to. for stream in self.streamer.streams_by_name.iterkeys(): self.subscribe_to_stream(stream, token) else: self.subscribe_to_stream(stream_name, token) def on_FEDERATION_ACK(self, cmd): self.streamer.federation_ack(cmd.token) def on_REMOVE_PUSHER(self, cmd): self.streamer.on_remove_pusher(cmd.app_id, cmd.push_key, cmd.user_id) def on_INVALIDATE_CACHE(self, cmd): self.streamer.on_invalidate_cache(cmd.cache_func, cmd.keys) def on_USER_IP(self, cmd): self.streamer.on_user_ip( cmd.user_id, cmd.access_token, cmd.ip, cmd.user_agent, cmd.device_id, cmd.last_seen, ) @defer.inlineCallbacks def subscribe_to_stream(self, stream_name, token): """Subscribe the remote to a streams. This invloves checking if they've missed anything and sending those updates down if they have. During that time new updates for the stream are queued and sent once we've sent down any missed updates. """ self.replication_streams.discard(stream_name) self.connecting_streams.add(stream_name) try: # Get missing updates updates, current_token = yield self.streamer.get_stream_updates( stream_name, token, ) # Send all the missing updates for update in updates: token, row = update[0], update[1] self.send_command(RdataCommand(stream_name, token, row)) # We send a POSITION command to ensure that they have an up to # date token (especially useful if we didn't send any updates # above) self.send_command(PositionCommand(stream_name, current_token)) # Now we can send any updates that came in while we were subscribing pending_rdata = self.pending_rdata.pop(stream_name, []) for token, update in pending_rdata: # Only send updates newer than the current token if token > current_token: self.send_command(RdataCommand(stream_name, token, update)) # They're now fully subscribed self.replication_streams.add(stream_name) except Exception as e: logger.exception("[%s] Failed to handle REPLICATE command", self.id()) self.send_error("failed to handle replicate: %r", e) finally: self.connecting_streams.discard(stream_name) def stream_update(self, stream_name, token, data): """Called when a new update is available to stream to clients. 
We need to check if the client is interested in the stream or not """ if stream_name in self.replication_streams: # The client is subscribed to the stream self.send_command(RdataCommand(stream_name, token, data)) elif stream_name in self.connecting_streams: # The client is being subscribed to the stream logger.debug("[%s] Queuing RDATA %r %r", self.id(), stream_name, token) self.pending_rdata.setdefault(stream_name, []).append((token, data)) else: # The client isn't subscribed logger.debug("[%s] Dropping RDATA %r %r", self.id(), stream_name, token) def send_sync(self, data): self.send_command(SyncCommand(data)) def on_connection_closed(self): BaseReplicationStreamProtocol.on_connection_closed(self) self.streamer.lost_connection(self) class ClientReplicationStreamProtocol(BaseReplicationStreamProtocol): VALID_INBOUND_COMMANDS = VALID_SERVER_COMMANDS VALID_OUTBOUND_COMMANDS = VALID_CLIENT_COMMANDS def __init__(self, client_name, server_name, clock, handler): BaseReplicationStreamProtocol.__init__(self, clock) self.client_name = client_name self.server_name = server_name self.handler = handler # Map of stream to batched updates. See RdataCommand for info on how # batching works. self.pending_batches = {} def connectionMade(self): self.send_command(NameCommand(self.client_name)) BaseReplicationStreamProtocol.connectionMade(self) # Once we've connected subscribe to the necessary streams for stream_name, token in self.handler.get_streams_to_replicate().iteritems(): self.replicate(stream_name, token) # Tell the server if we have any users currently syncing (should only # happen on synchrotrons) currently_syncing = self.handler.get_currently_syncing_users() now = self.clock.time_msec() for user_id in currently_syncing: self.send_command(UserSyncCommand(user_id, True, now)) # We've now finished connecting to so inform the client handler self.handler.update_connection(self) def on_SERVER(self, cmd): if cmd.data != self.server_name: logger.error("[%s] Connected to wrong remote: %r", self.id(), cmd.data) self.send_error("Wrong remote") def on_RDATA(self, cmd): try: row = STREAMS_MAP[cmd.stream_name].ROW_TYPE(*cmd.row) except Exception: logger.exception( "[%s] Failed to parse RDATA: %r %r", self.id(), cmd.stream_name, cmd.row ) raise if cmd.token is None: # I.e. this is part of a batch of updates for this stream. 
Batch # until we get an update for the stream with a non None token self.pending_batches.setdefault(cmd.stream_name, []).append(row) else: # Check if this is the last of a batch of updates rows = self.pending_batches.pop(cmd.stream_name, []) rows.append(row) self.handler.on_rdata(cmd.stream_name, cmd.token, rows) def on_POSITION(self, cmd): self.handler.on_position(cmd.stream_name, cmd.token) def on_SYNC(self, cmd): self.handler.on_sync(cmd.data) def replicate(self, stream_name, token): """Send the subscription request to the server """ if stream_name not in STREAMS_MAP: raise Exception("Invalid stream name %r" % (stream_name,)) logger.info( "[%s] Subscribing to replication stream: %r from %r", self.id(), stream_name, token ) self.send_command(ReplicateCommand(stream_name, token)) def on_connection_closed(self): BaseReplicationStreamProtocol.on_connection_closed(self) self.handler.update_connection(None) # The following simply registers metrics for the replication connections metrics.register_callback( "pending_commands", lambda: { (p.name, p.conn_id): len(p.pending_commands) for p in connected_connections }, labels=["name", "conn_id"], ) def transport_buffer_size(protocol): if protocol.transport: size = len(protocol.transport.dataBuffer) + protocol.transport._tempDataLen return size return 0 metrics.register_callback( "transport_send_buffer", lambda: { (p.name, p.conn_id): transport_buffer_size(p) for p in connected_connections }, labels=["name", "conn_id"], ) def transport_kernel_read_buffer_size(protocol, read=True): SIOCINQ = 0x541B SIOCOUTQ = 0x5411 if protocol.transport: fileno = protocol.transport.getHandle().fileno() if read: op = SIOCINQ else: op = SIOCOUTQ size = struct.unpack("I", fcntl.ioctl(fileno, op, '\0\0\0\0'))[0] return size return 0 metrics.register_callback( "transport_kernel_send_buffer", lambda: { (p.name, p.conn_id): transport_kernel_read_buffer_size(p, False) for p in connected_connections }, labels=["name", "conn_id"], ) metrics.register_callback( "transport_kernel_read_buffer", lambda: { (p.name, p.conn_id): transport_kernel_read_buffer_size(p, True) for p in connected_connections }, labels=["name", "conn_id"], ) metrics.register_callback( "inbound_commands", lambda: { (k[0], p.name, p.conn_id): count for p in connected_connections for k, count in p.inbound_commands_counter.counts.iteritems() }, labels=["command", "name", "conn_id"], ) metrics.register_callback( "outbound_commands", lambda: { (k[0], p.name, p.conn_id): count for p in connected_connections for k, count in p.outbound_commands_counter.counts.iteritems() }, labels=["command", "name", "conn_id"], ) synapse-0.24.0/synapse/replication/tcp/resource.py000066400000000000000000000255261317335640100222560ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2017 Vector Creations Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """The server side of the replication stream. 
""" from twisted.internet import defer, reactor from twisted.internet.protocol import Factory from streams import STREAMS_MAP, FederationStream from protocol import ServerReplicationStreamProtocol from synapse.util.metrics import Measure, measure_func import logging import synapse.metrics metrics = synapse.metrics.get_metrics_for(__name__) stream_updates_counter = metrics.register_counter( "stream_updates", labels=["stream_name"] ) user_sync_counter = metrics.register_counter("user_sync") federation_ack_counter = metrics.register_counter("federation_ack") remove_pusher_counter = metrics.register_counter("remove_pusher") invalidate_cache_counter = metrics.register_counter("invalidate_cache") user_ip_cache_counter = metrics.register_counter("user_ip_cache") logger = logging.getLogger(__name__) class ReplicationStreamProtocolFactory(Factory): """Factory for new replication connections. """ def __init__(self, hs): self.streamer = ReplicationStreamer(hs) self.clock = hs.get_clock() self.server_name = hs.config.server_name def buildProtocol(self, addr): return ServerReplicationStreamProtocol( self.server_name, self.clock, self.streamer, addr ) class ReplicationStreamer(object): """Handles replication connections. This needs to be poked when new replication data may be available. When new data is available it will propagate to all connected clients. """ def __init__(self, hs): self.store = hs.get_datastore() self.presence_handler = hs.get_presence_handler() self.clock = hs.get_clock() self.notifier = hs.get_notifier() # Current connections. self.connections = [] metrics.register_callback("total_connections", lambda: len(self.connections)) # List of streams that clients can subscribe to. # We only support federation stream if federation sending hase been # disabled on the master. self.streams = [ stream(hs) for stream in STREAMS_MAP.itervalues() if stream != FederationStream or not hs.config.send_federation ] self.streams_by_name = {stream.NAME: stream for stream in self.streams} metrics.register_callback( "connections_per_stream", lambda: { (stream_name,): len([ conn for conn in self.connections if stream_name in conn.replication_streams ]) for stream_name in self.streams_by_name }, labels=["stream_name"], ) self.federation_sender = None if not hs.config.send_federation: self.federation_sender = hs.get_federation_sender() self.notifier.add_replication_callback(self.on_notifier_poke) # Keeps track of whether we are currently checking for updates self.is_looping = False self.pending_updates = False reactor.addSystemEventTrigger("before", "shutdown", self.on_shutdown) def on_shutdown(self): # close all connections on shutdown for conn in self.connections: conn.send_error("server shutting down") @defer.inlineCallbacks def on_notifier_poke(self): """Checks if there is actually any new data and sends it to the connections if there are. This should get called each time new data is available, even if it is currently being executed, so that nothing gets missed """ if not self.connections: # Don't bother if nothing is listening. We still need to advance # the stream tokens otherwise they'll fall beihind forever for stream in self.streams: stream.discard_updates_and_advance() return # If we're in the process of checking for new updates, mark that fact # and return if self.is_looping: logger.debug("Noitifier poke loop already running") self.pending_updates = True return self.pending_updates = True self.is_looping = True try: # Keep looping while there have been pokes about potential updates. 
# This protects against the race where a stream we already checked # gets an update while we're handling other streams. while self.pending_updates: self.pending_updates = False with Measure(self.clock, "repl.stream.get_updates"): # First we tell the streams that they should update their # current tokens. for stream in self.streams: stream.advance_current_token() for stream in self.streams: if stream.last_token == stream.upto_token: continue logger.debug( "Getting stream: %s: %s -> %s", stream.NAME, stream.last_token, stream.upto_token ) try: updates, current_token = yield stream.get_updates() except: logger.info("Failed to handle stream %s", stream.NAME) raise logger.debug( "Sending %d updates to %d connections", len(updates), len(self.connections), ) if updates: logger.info( "Streaming: %s -> %s", stream.NAME, updates[-1][0] ) stream_updates_counter.inc_by(len(updates), stream.NAME) # Some streams return multiple rows with the same stream IDs, # we need to make sure they get sent out in batches. We do # this by setting the current token to all but the last of # a series of updates with the same token to have a None # token. See RdataCommand for more details. batched_updates = _batch_updates(updates) for conn in self.connections: for token, row in batched_updates: try: conn.stream_update(stream.NAME, token, row) except Exception: logger.exception("Failed to replicate") logger.debug("No more pending updates, breaking poke loop") finally: self.pending_updates = False self.is_looping = False @measure_func("repl.get_stream_updates") def get_stream_updates(self, stream_name, token): """For a given stream get all updates since token. This is called when a client first subscribes to a stream. """ stream = self.streams_by_name.get(stream_name, None) if not stream: raise Exception("unknown stream %s", stream_name) return stream.get_updates_since(token) @measure_func("repl.federation_ack") def federation_ack(self, token): """We've received an ack for federation stream from a client. """ federation_ack_counter.inc() if self.federation_sender: self.federation_sender.federation_ack(token) @measure_func("repl.on_user_sync") def on_user_sync(self, conn_id, user_id, is_syncing, last_sync_ms): """A client has started/stopped syncing on a worker. """ user_sync_counter.inc() self.presence_handler.update_external_syncs_row( conn_id, user_id, is_syncing, last_sync_ms, ) @measure_func("repl.on_remove_pusher") @defer.inlineCallbacks def on_remove_pusher(self, app_id, push_key, user_id): """A client has asked us to remove a pusher """ remove_pusher_counter.inc() yield self.store.delete_pusher_by_app_id_pushkey_user_id( app_id=app_id, pushkey=push_key, user_id=user_id ) self.notifier.on_new_replication_data() @measure_func("repl.on_invalidate_cache") def on_invalidate_cache(self, cache_func, keys): """The client has asked us to invalidate a cache """ invalidate_cache_counter.inc() getattr(self.store, cache_func).invalidate(tuple(keys)) @measure_func("repl.on_user_ip") def on_user_ip(self, user_id, access_token, ip, user_agent, device_id, last_seen): """The client saw a user request """ user_ip_cache_counter.inc() self.store.insert_client_ip( user_id, access_token, ip, user_agent, device_id, last_seen, ) def send_sync_to_all_connections(self, data): """Sends a SYNC command to all clients. Used in tests. 
""" for conn in self.connections: conn.send_sync(data) def new_connection(self, connection): """A new client connection has been established """ self.connections.append(connection) def lost_connection(self, connection): """A client connection has been lost """ try: self.connections.remove(connection) except ValueError: pass # We need to tell the presence handler that the connection has been # lost so that it can handle any ongoing syncs on that connection. self.presence_handler.update_external_syncs_clear(connection.conn_id) def _batch_updates(updates): """Takes a list of updates of form [(token, row)] and sets the token to None for all rows where the next row has the same token. This is used to implement batching. For example: [(1, _), (1, _), (2, _), (3, _), (3, _)] becomes: [(None, _), (1, _), (2, _), (None, _), (3, _)] """ if not updates: return [] new_updates = [] for i, update in enumerate(updates[:-1]): if update[0] == updates[i + 1][0]: new_updates.append((None, update[1])) else: new_updates.append(update) new_updates.append(updates[-1]) return new_updates synapse-0.24.0/synapse/replication/tcp/streams.py000066400000000000000000000347171317335640100221070ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2017 Vector Creations Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Defines all the valid streams that clients can subscribe to, and the format of the rows returned by each stream. 
Each stream is defined by the following information: stream name: The name of the stream row type: The type that is used to serialise/deserialse the row current_token: The function that returns the current token for the stream update_function: The function that returns a list of updates between two tokens """ from twisted.internet import defer from collections import namedtuple import logging logger = logging.getLogger(__name__) MAX_EVENTS_BEHIND = 10000 EventStreamRow = namedtuple("EventStreamRow", ( "event_id", # str "room_id", # str "type", # str "state_key", # str, optional "redacts", # str, optional )) BackfillStreamRow = namedtuple("BackfillStreamRow", ( "event_id", # str "room_id", # str "type", # str "state_key", # str, optional "redacts", # str, optional )) PresenceStreamRow = namedtuple("PresenceStreamRow", ( "user_id", # str "state", # str "last_active_ts", # int "last_federation_update_ts", # int "last_user_sync_ts", # int "status_msg", # str "currently_active", # bool )) TypingStreamRow = namedtuple("TypingStreamRow", ( "room_id", # str "user_ids", # list(str) )) ReceiptsStreamRow = namedtuple("ReceiptsStreamRow", ( "room_id", # str "receipt_type", # str "user_id", # str "event_id", # str "data", # dict )) PushRulesStreamRow = namedtuple("PushRulesStreamRow", ( "user_id", # str )) PushersStreamRow = namedtuple("PushersStreamRow", ( "user_id", # str "app_id", # str "pushkey", # str "deleted", # bool )) CachesStreamRow = namedtuple("CachesStreamRow", ( "cache_func", # str "keys", # list(str) "invalidation_ts", # int )) PublicRoomsStreamRow = namedtuple("PublicRoomsStreamRow", ( "room_id", # str "visibility", # str "appservice_id", # str, optional "network_id", # str, optional )) DeviceListsStreamRow = namedtuple("DeviceListsStreamRow", ( "user_id", # str "destination", # str )) ToDeviceStreamRow = namedtuple("ToDeviceStreamRow", ( "entity", # str )) FederationStreamRow = namedtuple("FederationStreamRow", ( "type", # str, the type of data as defined in the BaseFederationRows "data", # dict, serialization of a federation.send_queue.BaseFederationRow )) TagAccountDataStreamRow = namedtuple("TagAccountDataStreamRow", ( "user_id", # str "room_id", # str "data", # dict )) AccountDataStreamRow = namedtuple("AccountDataStream", ( "user_id", # str "room_id", # str "data_type", # str "data", # dict )) CurrentStateDeltaStreamRow = namedtuple("CurrentStateDeltaStream", ( "room_id", # str "type", # str "state_key", # str "event_id", # str, optional )) GroupsStreamRow = namedtuple("GroupsStreamRow", ( "group_id", # str "user_id", # str "type", # str "content", # dict )) class Stream(object): """Base class for the streams. Provides a `get_updates()` function that returns new updates since the last time it was called up until the point `advance_current_token` was called. """ NAME = None # The name of the stream ROW_TYPE = None # The type of the row _LIMITED = True # Whether the update function takes a limit def __init__(self, hs): # The token from which we last asked for updates self.last_token = self.current_token() # The token that we will get updates up to self.upto_token = self.current_token() def advance_current_token(self): """Updates `upto_token` to "now", which updates up until which point get_updates[_since] will fetch rows till. """ self.upto_token = self.current_token() def discard_updates_and_advance(self): """Called when the stream should advance but the updates would be discarded, e.g. when there are no currently connected workers. 
""" self.upto_token = self.current_token() self.last_token = self.upto_token @defer.inlineCallbacks def get_updates(self): """Gets all updates since the last time this function was called (or since the stream was constructed if it hadn't been called before), until the `upto_token` Returns: (list(ROW_TYPE), int): list of updates plus the token used as an upper bound of the updates (i.e. the "current token") """ updates, current_token = yield self.get_updates_since(self.last_token) self.last_token = current_token defer.returnValue((updates, current_token)) @defer.inlineCallbacks def get_updates_since(self, from_token): """Like get_updates except allows specifying from when we should stream updates Returns: (list(ROW_TYPE), int): list of updates plus the token used as an upper bound of the updates (i.e. the "current token") """ if from_token in ("NOW", "now"): defer.returnValue(([], self.upto_token)) current_token = self.upto_token from_token = int(from_token) if from_token == current_token: defer.returnValue(([], current_token)) if self._LIMITED: rows = yield self.update_function( from_token, current_token, limit=MAX_EVENTS_BEHIND + 1, ) if len(rows) >= MAX_EVENTS_BEHIND: raise Exception("stream %s has fallen behined" % (self.NAME)) else: rows = yield self.update_function( from_token, current_token, ) updates = [(row[0], self.ROW_TYPE(*row[1:])) for row in rows] defer.returnValue((updates, current_token)) def current_token(self): """Gets the current token of the underlying streams. Should be provided by the sub classes Returns: int """ raise NotImplementedError() def update_function(self, from_token, current_token, limit=None): """Get updates between from_token and to_token. If Stream._LIMITED is True then limit is provided, otherwise it's not. Returns: Deferred(list(tuple)): the first entry in the tuple is the token for that update, and the rest of the tuple gets used to construct a ``ROW_TYPE`` instance """ raise NotImplementedError() class EventsStream(Stream): """We received a new event, or an event went from being an outlier to not """ NAME = "events" ROW_TYPE = EventStreamRow def __init__(self, hs): store = hs.get_datastore() self.current_token = store.get_current_events_token self.update_function = store.get_all_new_forward_event_rows super(EventsStream, self).__init__(hs) class BackfillStream(Stream): """We fetched some old events and either we had never seen that event before or it went from being an outlier to not. 
""" NAME = "backfill" ROW_TYPE = BackfillStreamRow def __init__(self, hs): store = hs.get_datastore() self.current_token = store.get_current_backfill_token self.update_function = store.get_all_new_backfill_event_rows super(BackfillStream, self).__init__(hs) class PresenceStream(Stream): NAME = "presence" _LIMITED = False ROW_TYPE = PresenceStreamRow def __init__(self, hs): store = hs.get_datastore() presence_handler = hs.get_presence_handler() self.current_token = store.get_current_presence_token self.update_function = presence_handler.get_all_presence_updates super(PresenceStream, self).__init__(hs) class TypingStream(Stream): NAME = "typing" _LIMITED = False ROW_TYPE = TypingStreamRow def __init__(self, hs): typing_handler = hs.get_typing_handler() self.current_token = typing_handler.get_current_token self.update_function = typing_handler.get_all_typing_updates super(TypingStream, self).__init__(hs) class ReceiptsStream(Stream): NAME = "receipts" ROW_TYPE = ReceiptsStreamRow def __init__(self, hs): store = hs.get_datastore() self.current_token = store.get_max_receipt_stream_id self.update_function = store.get_all_updated_receipts super(ReceiptsStream, self).__init__(hs) class PushRulesStream(Stream): """A user has changed their push rules """ NAME = "push_rules" ROW_TYPE = PushRulesStreamRow def __init__(self, hs): self.store = hs.get_datastore() super(PushRulesStream, self).__init__(hs) def current_token(self): push_rules_token, _ = self.store.get_push_rules_stream_token() return push_rules_token @defer.inlineCallbacks def update_function(self, from_token, to_token, limit): rows = yield self.store.get_all_push_rule_updates(from_token, to_token, limit) defer.returnValue([(row[0], row[2]) for row in rows]) class PushersStream(Stream): """A user has added/changed/removed a pusher """ NAME = "pushers" ROW_TYPE = PushersStreamRow def __init__(self, hs): store = hs.get_datastore() self.current_token = store.get_pushers_stream_token self.update_function = store.get_all_updated_pushers_rows super(PushersStream, self).__init__(hs) class CachesStream(Stream): """A cache was invalidated on the master and no other stream would invalidate the cache on the workers """ NAME = "caches" ROW_TYPE = CachesStreamRow def __init__(self, hs): store = hs.get_datastore() self.current_token = store.get_cache_stream_token self.update_function = store.get_all_updated_caches super(CachesStream, self).__init__(hs) class PublicRoomsStream(Stream): """The public rooms list changed """ NAME = "public_rooms" ROW_TYPE = PublicRoomsStreamRow def __init__(self, hs): store = hs.get_datastore() self.current_token = store.get_current_public_room_stream_id self.update_function = store.get_all_new_public_rooms super(PublicRoomsStream, self).__init__(hs) class DeviceListsStream(Stream): """Someone added/changed/removed a device """ NAME = "device_lists" _LIMITED = False ROW_TYPE = DeviceListsStreamRow def __init__(self, hs): store = hs.get_datastore() self.current_token = store.get_device_stream_token self.update_function = store.get_all_device_list_changes_for_remotes super(DeviceListsStream, self).__init__(hs) class ToDeviceStream(Stream): """New to_device messages for a client """ NAME = "to_device" ROW_TYPE = ToDeviceStreamRow def __init__(self, hs): store = hs.get_datastore() self.current_token = store.get_to_device_stream_token self.update_function = store.get_all_new_device_messages super(ToDeviceStream, self).__init__(hs) class FederationStream(Stream): """Data to be sent over federation. 
Only available when master has federation sending disabled. """ NAME = "federation" ROW_TYPE = FederationStreamRow def __init__(self, hs): federation_sender = hs.get_federation_sender() self.current_token = federation_sender.get_current_token self.update_function = federation_sender.get_replication_rows super(FederationStream, self).__init__(hs) class TagAccountDataStream(Stream): """Someone added/removed a tag for a room """ NAME = "tag_account_data" ROW_TYPE = TagAccountDataStreamRow def __init__(self, hs): store = hs.get_datastore() self.current_token = store.get_max_account_data_stream_id self.update_function = store.get_all_updated_tags super(TagAccountDataStream, self).__init__(hs) class AccountDataStream(Stream): """Global or per room account data was changed """ NAME = "account_data" ROW_TYPE = AccountDataStreamRow def __init__(self, hs): self.store = hs.get_datastore() self.current_token = self.store.get_max_account_data_stream_id super(AccountDataStream, self).__init__(hs) @defer.inlineCallbacks def update_function(self, from_token, to_token, limit): global_results, room_results = yield self.store.get_all_updated_account_data( from_token, from_token, to_token, limit ) results = list(room_results) results.extend( (stream_id, user_id, None, account_data_type, content,) for stream_id, user_id, account_data_type, content in global_results ) defer.returnValue(results) class CurrentStateDeltaStream(Stream): """Current state for a room was changed """ NAME = "current_state_deltas" ROW_TYPE = CurrentStateDeltaStreamRow def __init__(self, hs): store = hs.get_datastore() self.current_token = store.get_max_current_state_delta_stream_id self.update_function = store.get_all_updated_current_state_deltas super(CurrentStateDeltaStream, self).__init__(hs) class GroupServerStream(Stream): NAME = "groups" ROW_TYPE = GroupsStreamRow def __init__(self, hs): store = hs.get_datastore() self.current_token = store.get_group_stream_token self.update_function = store.get_all_groups_changes super(GroupServerStream, self).__init__(hs) STREAMS_MAP = { stream.NAME: stream for stream in ( EventsStream, BackfillStream, PresenceStream, TypingStream, ReceiptsStream, PushRulesStream, PushersStream, CachesStream, PublicRoomsStream, DeviceListsStream, ToDeviceStream, FederationStream, TagAccountDataStream, AccountDataStream, CurrentStateDeltaStream, GroupServerStream, ) } synapse-0.24.0/synapse/rest/000077500000000000000000000000001317335640100157215ustar00rootroot00000000000000synapse-0.24.0/synapse/rest/__init__.py000066400000000000000000000065531317335640100200430ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
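# Every module imported below follows the same convention: it exposes a
# register_servlets(hs, http_server) function that instantiates its servlets
# and attaches them to the JsonResource, as ClientRestResource.register_servlets
# does further down. A minimal sketch of that convention (a hypothetical "ping"
# module, shown purely as an illustration and assuming the v1 helpers
# RestServlet and client_path_patterns) might look like:
#
#     class PingRestServlet(RestServlet):
#         PATTERNS = client_path_patterns("/ping$")
#
#         def on_GET(self, request):
#             return 200, {"pong": True}
#
#     def register_servlets(hs, http_server):
#         PingRestServlet().register(http_server)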
from synapse.rest.client import ( versions, ) from synapse.rest.client.v1 import ( room, events, profile, presence, initial_sync, directory, voip, admin, pusher, push_rule, register as v1_register, login as v1_login, logout, ) from synapse.rest.client.v2_alpha import ( sync, filter, account, register, auth, receipts, read_marker, keys, tokenrefresh, tags, account_data, report_event, openid, notifications, devices, thirdparty, sendtodevice, user_directory, groups, ) from synapse.http.server import JsonResource class ClientRestResource(JsonResource): """A resource for version 1 of the matrix client API.""" def __init__(self, hs): JsonResource.__init__(self, hs, canonical_json=False) self.register_servlets(self, hs) @staticmethod def register_servlets(client_resource, hs): versions.register_servlets(client_resource) # "v1" room.register_servlets(hs, client_resource) events.register_servlets(hs, client_resource) v1_register.register_servlets(hs, client_resource) v1_login.register_servlets(hs, client_resource) profile.register_servlets(hs, client_resource) presence.register_servlets(hs, client_resource) initial_sync.register_servlets(hs, client_resource) directory.register_servlets(hs, client_resource) voip.register_servlets(hs, client_resource) admin.register_servlets(hs, client_resource) pusher.register_servlets(hs, client_resource) push_rule.register_servlets(hs, client_resource) logout.register_servlets(hs, client_resource) # "v2" sync.register_servlets(hs, client_resource) filter.register_servlets(hs, client_resource) account.register_servlets(hs, client_resource) register.register_servlets(hs, client_resource) auth.register_servlets(hs, client_resource) receipts.register_servlets(hs, client_resource) read_marker.register_servlets(hs, client_resource) keys.register_servlets(hs, client_resource) tokenrefresh.register_servlets(hs, client_resource) tags.register_servlets(hs, client_resource) account_data.register_servlets(hs, client_resource) report_event.register_servlets(hs, client_resource) openid.register_servlets(hs, client_resource) notifications.register_servlets(hs, client_resource) devices.register_servlets(hs, client_resource) thirdparty.register_servlets(hs, client_resource) sendtodevice.register_servlets(hs, client_resource) user_directory.register_servlets(hs, client_resource) groups.register_servlets(hs, client_resource) synapse-0.24.0/synapse/rest/client/000077500000000000000000000000001317335640100171775ustar00rootroot00000000000000synapse-0.24.0/synapse/rest/client/__init__.py000066400000000000000000000011401317335640100213040ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. synapse-0.24.0/synapse/rest/client/transactions.py000066400000000000000000000102141317335640100222570ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """This module contains logic for storing HTTP PUT transactions. This is used to ensure idempotency when performing PUTs using the REST API.""" import logging from synapse.api.auth import get_access_token_from_request from synapse.util.async import ObservableDeferred logger = logging.getLogger(__name__) def get_transaction_key(request): """A helper function which returns a transaction key that can be used with TransactionCache for idempotent requests. Idempotency is based on the returned key being the same for separate requests to the same endpoint. The key is formed from the HTTP request path and the access_token for the requesting user. Args: request (twisted.web.http.Request): The incoming request. Must contain an access_token. Returns: str: A transaction key """ token = get_access_token_from_request(request) return request.path + "/" + token CLEANUP_PERIOD_MS = 1000 * 60 * 30 # 30 mins class HttpTransactionCache(object): def __init__(self, clock): self.clock = clock self.transactions = { # $txn_key: (ObservableDeferred<(res_code, res_json_body)>, timestamp) } # Try to clean entries every 30 mins. This means entries will exist # for at *LEAST* 30 mins, and at *MOST* 60 mins. self.cleaner = self.clock.looping_call(self._cleanup, CLEANUP_PERIOD_MS) def fetch_or_execute_request(self, request, fn, *args, **kwargs): """A helper function for fetch_or_execute which extracts a transaction key from the given request. See: fetch_or_execute """ return self.fetch_or_execute( get_transaction_key(request), fn, *args, **kwargs ) def fetch_or_execute(self, txn_key, fn, *args, **kwargs): """Fetches the response for this transaction, or executes the given function to produce a response for this transaction. Args: txn_key (str): A key to ensure idempotency should fetch_or_execute be called again at a later point in time. fn (function): A function which returns a tuple of (response_code, response_dict). *args: Arguments to pass to fn. **kwargs: Keyword arguments to pass to fn. Returns: Deferred which resolves to a tuple of (response_code, response_dict). """ try: return self.transactions[txn_key][0].observe() except (KeyError, IndexError): pass # execute the function instead. deferred = fn(*args, **kwargs) # if the request fails with a Twisted failure, remove it # from the transaction map. This is done to ensure that we don't # cache transient errors like rate-limiting errors, etc. def remove_from_map(err): self.transactions.pop(txn_key, None) return err deferred.addErrback(remove_from_map) # We don't add any other errbacks to the raw deferred, so we ask # ObservableDeferred to swallow the error. This is fine as the error will # still be reported to the observers. 
observable = ObservableDeferred(deferred, consumeErrors=True) self.transactions[txn_key] = (observable, self.clock.time_msec()) return observable.observe() def _cleanup(self): now = self.clock.time_msec() for key in self.transactions.keys(): ts = self.transactions[key][1] if now > (ts + CLEANUP_PERIOD_MS): # after cleanup period del self.transactions[key] synapse-0.24.0/synapse/rest/client/v1/000077500000000000000000000000001317335640100175255ustar00rootroot00000000000000synapse-0.24.0/synapse/rest/client/v1/__init__.py000066400000000000000000000011371317335640100216400ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. synapse-0.24.0/synapse/rest/client/v1/admin.py000066400000000000000000000430101317335640100211650ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
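# Almost every handler in this module requires the requester to be a server admin
# (whois additionally allows users to query themselves): each handler calls
# is_server_admin and raises a 403 AuthError otherwise. An admin call therefore looks
# something like the following (hypothetical homeserver and token, shown only as an
# illustration):
#
#     GET http://localhost:8008/_matrix/client/api/v1/admin/whois/@user:example.com?access_token=<admin_access_token>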
from twisted.internet import defer from synapse.api.constants import Membership from synapse.api.errors import AuthError, SynapseError from synapse.types import UserID, create_requester from synapse.http.servlet import parse_json_object_from_request from .base import ClientV1RestServlet, client_path_patterns import logging logger = logging.getLogger(__name__) class UsersRestServlet(ClientV1RestServlet): PATTERNS = client_path_patterns("/admin/users/(?P<user_id>[^/]*)") def __init__(self, hs): super(UsersRestServlet, self).__init__(hs) self.handlers = hs.get_handlers() @defer.inlineCallbacks def on_GET(self, request, user_id): target_user = UserID.from_string(user_id) requester = yield self.auth.get_user_by_req(request) is_admin = yield self.auth.is_server_admin(requester.user) if not is_admin: raise AuthError(403, "You are not a server admin") # To allow all users to get the users list # if not is_admin and target_user != auth_user: # raise AuthError(403, "You are not a server admin") if not self.hs.is_mine(target_user): raise SynapseError(400, "Can only query local users") ret = yield self.handlers.admin_handler.get_users() defer.returnValue((200, ret)) class WhoisRestServlet(ClientV1RestServlet): PATTERNS = client_path_patterns("/admin/whois/(?P<user_id>[^/]*)") def __init__(self, hs): super(WhoisRestServlet, self).__init__(hs) self.handlers = hs.get_handlers() @defer.inlineCallbacks def on_GET(self, request, user_id): target_user = UserID.from_string(user_id) requester = yield self.auth.get_user_by_req(request) auth_user = requester.user is_admin = yield self.auth.is_server_admin(requester.user) if not is_admin and target_user != auth_user: raise AuthError(403, "You are not a server admin") if not self.hs.is_mine(target_user): raise SynapseError(400, "Can only whois a local user") ret = yield self.handlers.admin_handler.get_whois(target_user) defer.returnValue((200, ret)) class PurgeMediaCacheRestServlet(ClientV1RestServlet): PATTERNS = client_path_patterns("/admin/purge_media_cache") def __init__(self, hs): self.media_repository = hs.get_media_repository() super(PurgeMediaCacheRestServlet, self).__init__(hs) @defer.inlineCallbacks def on_POST(self, request): requester = yield self.auth.get_user_by_req(request) is_admin = yield self.auth.is_server_admin(requester.user) if not is_admin: raise AuthError(403, "You are not a server admin") before_ts = request.args.get("before_ts", None) if not before_ts: raise SynapseError(400, "Missing 'before_ts' arg") logger.info("before_ts: %r", before_ts[0]) try: before_ts = int(before_ts[0]) except Exception: raise SynapseError(400, "Invalid 'before_ts' arg") ret = yield self.media_repository.delete_old_remote_media(before_ts) defer.returnValue((200, ret)) class PurgeHistoryRestServlet(ClientV1RestServlet): PATTERNS = client_path_patterns( "/admin/purge_history/(?P<room_id>[^/]*)/(?P<event_id>[^/]*)" ) def __init__(self, hs): super(PurgeHistoryRestServlet, self).__init__(hs) self.handlers = hs.get_handlers() @defer.inlineCallbacks def on_POST(self, request, room_id, event_id): requester = yield self.auth.get_user_by_req(request) is_admin = yield self.auth.is_server_admin(requester.user) if not is_admin: raise AuthError(403, "You are not a server admin") yield self.handlers.message_handler.purge_history(room_id, event_id) defer.returnValue((200, {})) class DeactivateAccountRestServlet(ClientV1RestServlet): PATTERNS = client_path_patterns("/admin/deactivate/(?P<target_user_id>[^/]*)") def __init__(self, hs): self.store = hs.get_datastore() super(DeactivateAccountRestServlet, self).__init__(hs)
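# on_POST (below) deactivates the target account in three steps: it deletes all of
# the user's access tokens, removes their threepids, and clears their password hash
# so that no further logins are possible.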
@defer.inlineCallbacks def on_POST(self, request, target_user_id): UserID.from_string(target_user_id) requester = yield self.auth.get_user_by_req(request) is_admin = yield self.auth.is_server_admin(requester.user) if not is_admin: raise AuthError(403, "You are not a server admin") # FIXME: Theoretically there is a race here wherein user resets password # using threepid. yield self.store.user_delete_access_tokens(target_user_id) yield self.store.user_delete_threepids(target_user_id) yield self.store.user_set_password_hash(target_user_id, None) defer.returnValue((200, {})) class ShutdownRoomRestServlet(ClientV1RestServlet): """Shuts down a room by removing all local users from the room and blocking all future invites and joins to the room. Any local aliases will be repointed to a new room created by `new_room_user_id` and kicked users will be auto joined to the new room. """ PATTERNS = client_path_patterns("/admin/shutdown_room/(?P[^/]+)") DEFAULT_MESSAGE = ( "Sharing illegal content on this server is not permitted and rooms in" " violation will be blocked." ) def __init__(self, hs): super(ShutdownRoomRestServlet, self).__init__(hs) self.store = hs.get_datastore() self.handlers = hs.get_handlers() self.state = hs.get_state_handler() @defer.inlineCallbacks def on_POST(self, request, room_id): requester = yield self.auth.get_user_by_req(request) is_admin = yield self.auth.is_server_admin(requester.user) if not is_admin: raise AuthError(403, "You are not a server admin") content = parse_json_object_from_request(request) new_room_user_id = content.get("new_room_user_id") if not new_room_user_id: raise SynapseError(400, "Please provide field `new_room_user_id`") room_creator_requester = create_requester(new_room_user_id) message = content.get("message", self.DEFAULT_MESSAGE) room_name = content.get("room_name", "Content Violation Notification") info = yield self.handlers.room_creation_handler.create_room( room_creator_requester, config={ "preset": "public_chat", "name": room_name, "power_level_content_override": { "users_default": -10, }, }, ratelimit=False, ) new_room_id = info["room_id"] msg_handler = self.handlers.message_handler yield msg_handler.create_and_send_nonmember_event( room_creator_requester, { "type": "m.room.message", "content": {"body": message, "msgtype": "m.text"}, "room_id": new_room_id, "sender": new_room_user_id, }, ratelimit=False, ) requester_user_id = requester.user.to_string() logger.info("Shutting down room %r", room_id) yield self.store.block_room(room_id, requester_user_id) users = yield self.state.get_current_user_in_room(room_id) kicked_users = [] for user_id in users: if not self.hs.is_mine_id(user_id): continue logger.info("Kicking %r from %r...", user_id, room_id) target_requester = create_requester(user_id) yield self.handlers.room_member_handler.update_membership( requester=target_requester, target=target_requester.user, room_id=room_id, action=Membership.LEAVE, content={}, ratelimit=False ) yield self.handlers.room_member_handler.forget(target_requester.user, room_id) yield self.handlers.room_member_handler.update_membership( requester=target_requester, target=target_requester.user, room_id=new_room_id, action=Membership.JOIN, content={}, ratelimit=False ) kicked_users.append(user_id) aliases_for_room = yield self.store.get_aliases_for_room(room_id) yield self.store.update_aliases_for_room( room_id, new_room_id, requester_user_id ) defer.returnValue((200, { "kicked_users": kicked_users, "local_aliases": aliases_for_room, "new_room_id": new_room_id, })) class 
QuarantineMediaInRoom(ClientV1RestServlet): """Quarantines all media in a room so that no one can download it via this server. """ PATTERNS = client_path_patterns("/admin/quarantine_media/(?P<room_id>[^/]+)") def __init__(self, hs): super(QuarantineMediaInRoom, self).__init__(hs) self.store = hs.get_datastore() @defer.inlineCallbacks def on_POST(self, request, room_id): requester = yield self.auth.get_user_by_req(request) is_admin = yield self.auth.is_server_admin(requester.user) if not is_admin: raise AuthError(403, "You are not a server admin") num_quarantined = yield self.store.quarantine_media_ids_in_room( room_id, requester.user.to_string(), ) defer.returnValue((200, {"num_quarantined": num_quarantined})) class ResetPasswordRestServlet(ClientV1RestServlet): """Post request to allow an administrator to reset the password for a user. This needs the requesting user to have administrator access in Synapse. Example: http://localhost:8008/_matrix/client/api/v1/admin/reset_password/ @user:to_reset_password?access_token=admin_access_token JsonBodyToSend: { "new_password": "secret" } Returns: 200 OK with empty object if success otherwise an error. """ PATTERNS = client_path_patterns("/admin/reset_password/(?P<target_user_id>[^/]*)") def __init__(self, hs): self.store = hs.get_datastore() super(ResetPasswordRestServlet, self).__init__(hs) self.hs = hs self.auth = hs.get_auth() self.auth_handler = hs.get_auth_handler() @defer.inlineCallbacks def on_POST(self, request, target_user_id): """Post request to allow an administrator to reset the password for a user. This needs the requesting user to have administrator access in Synapse. """ UserID.from_string(target_user_id) requester = yield self.auth.get_user_by_req(request) is_admin = yield self.auth.is_server_admin(requester.user) if not is_admin: raise AuthError(403, "You are not a server admin") params = parse_json_object_from_request(request) new_password = params['new_password'] if not new_password: raise SynapseError(400, "Missing 'new_password' arg") logger.info("new_password: %r", new_password) yield self.auth_handler.set_password( target_user_id, new_password, requester ) defer.returnValue((200, {})) class GetUsersPaginatedRestServlet(ClientV1RestServlet): """Get request to get a specific number of users from Synapse. This needs the requesting user to have administrator access in Synapse. Example: http://localhost:8008/_matrix/client/api/v1/admin/users_paginate/ @admin:user?access_token=admin_access_token&start=0&limit=10 Returns: 200 OK with json object {list[dict[str, Any]], count} or empty object. """ PATTERNS = client_path_patterns("/admin/users_paginate/(?P<target_user_id>[^/]*)") def __init__(self, hs): self.store = hs.get_datastore() super(GetUsersPaginatedRestServlet, self).__init__(hs) self.hs = hs self.auth = hs.get_auth() self.handlers = hs.get_handlers() @defer.inlineCallbacks def on_GET(self, request, target_user_id): """Get request to get a specific number of users from Synapse. This needs the requesting user to have administrator access in Synapse.
""" target_user = UserID.from_string(target_user_id) requester = yield self.auth.get_user_by_req(request) is_admin = yield self.auth.is_server_admin(requester.user) if not is_admin: raise AuthError(403, "You are not a server admin") # To allow all users to get the users list # if not is_admin and target_user != auth_user: # raise AuthError(403, "You are not a server admin") if not self.hs.is_mine(target_user): raise SynapseError(400, "Can only users a local user") order = "name" # order by name in user table start = request.args.get("start")[0] limit = request.args.get("limit")[0] if not limit: raise SynapseError(400, "Missing 'limit' arg") if not start: raise SynapseError(400, "Missing 'start' arg") logger.info("limit: %s, start: %s", limit, start) ret = yield self.handlers.admin_handler.get_users_paginate( order, start, limit ) defer.returnValue((200, ret)) @defer.inlineCallbacks def on_POST(self, request, target_user_id): """Post request to get specific number of users from Synapse.. This needs user to have administrator access in Synapse. Example: http://localhost:8008/_matrix/client/api/v1/admin/users_paginate/ @admin:user?access_token=admin_access_token JsonBodyToSend: { "start": "0", "limit": "10 } Returns: 200 OK with json object {list[dict[str, Any]], count} or empty object. """ UserID.from_string(target_user_id) requester = yield self.auth.get_user_by_req(request) is_admin = yield self.auth.is_server_admin(requester.user) if not is_admin: raise AuthError(403, "You are not a server admin") order = "name" # order by name in user table params = parse_json_object_from_request(request) limit = params['limit'] start = params['start'] if not limit: raise SynapseError(400, "Missing 'limit' arg") if not start: raise SynapseError(400, "Missing 'start' arg") logger.info("limit: %s, start: %s", limit, start) ret = yield self.handlers.admin_handler.get_users_paginate( order, start, limit ) defer.returnValue((200, ret)) class SearchUsersRestServlet(ClientV1RestServlet): """Get request to search user table for specific users according to search term. This needs user to have administrator access in Synapse. Example: http://localhost:8008/_matrix/client/api/v1/admin/search_users/ @admin:user?access_token=admin_access_token&term=alice Returns: 200 OK with json object {list[dict[str, Any]], count} or empty object. """ PATTERNS = client_path_patterns("/admin/search_users/(?P[^/]*)") def __init__(self, hs): self.store = hs.get_datastore() super(SearchUsersRestServlet, self).__init__(hs) self.hs = hs self.auth = hs.get_auth() self.handlers = hs.get_handlers() @defer.inlineCallbacks def on_GET(self, request, target_user_id): """Get request to search user table for specific users according to search term. This needs user to have a administrator access in Synapse. 
""" target_user = UserID.from_string(target_user_id) requester = yield self.auth.get_user_by_req(request) is_admin = yield self.auth.is_server_admin(requester.user) if not is_admin: raise AuthError(403, "You are not a server admin") # To allow all users to get the users list # if not is_admin and target_user != auth_user: # raise AuthError(403, "You are not a server admin") if not self.hs.is_mine(target_user): raise SynapseError(400, "Can only users a local user") term = request.args.get("term")[0] if not term: raise SynapseError(400, "Missing 'term' arg") logger.info("term: %s ", term) ret = yield self.handlers.admin_handler.search_users( term ) defer.returnValue((200, ret)) def register_servlets(hs, http_server): WhoisRestServlet(hs).register(http_server) PurgeMediaCacheRestServlet(hs).register(http_server) DeactivateAccountRestServlet(hs).register(http_server) PurgeHistoryRestServlet(hs).register(http_server) UsersRestServlet(hs).register(http_server) ResetPasswordRestServlet(hs).register(http_server) GetUsersPaginatedRestServlet(hs).register(http_server) SearchUsersRestServlet(hs).register(http_server) ShutdownRoomRestServlet(hs).register(http_server) QuarantineMediaInRoom(hs).register(http_server) synapse-0.24.0/synapse/rest/client/v1/base.py000066400000000000000000000040221317335640100210070ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """This module contains base REST classes for constructing client v1 servlets. """ from synapse.http.servlet import RestServlet from synapse.api.urls import CLIENT_PREFIX from synapse.rest.client.transactions import HttpTransactionCache import re import logging logger = logging.getLogger(__name__) def client_path_patterns(path_regex, releases=(0,), include_in_unstable=True): """Creates a regex compiled client path with the correct client path prefix. Args: path_regex (str): The regex string to match. This should NOT have a ^ as this will be prefixed. Returns: SRE_Pattern """ patterns = [re.compile("^" + CLIENT_PREFIX + path_regex)] if include_in_unstable: unstable_prefix = CLIENT_PREFIX.replace("/api/v1", "/unstable") patterns.append(re.compile("^" + unstable_prefix + path_regex)) for release in releases: new_prefix = CLIENT_PREFIX.replace("/api/v1", "/r%d" % release) patterns.append(re.compile("^" + new_prefix + path_regex)) return patterns class ClientV1RestServlet(RestServlet): """A base Synapse REST Servlet for the client version 1 API. """ def __init__(self, hs): """ Args: hs (synapse.server.HomeServer): """ self.hs = hs self.builder_factory = hs.get_event_builder_factory() self.auth = hs.get_v1auth() self.txns = HttpTransactionCache(hs.get_clock()) synapse-0.24.0/synapse/rest/client/v1/directory.py000066400000000000000000000165031317335640100221100ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from twisted.internet import defer from synapse.api.errors import AuthError, SynapseError, Codes from synapse.types import RoomAlias from synapse.http.servlet import parse_json_object_from_request from .base import ClientV1RestServlet, client_path_patterns import logging logger = logging.getLogger(__name__) def register_servlets(hs, http_server): ClientDirectoryServer(hs).register(http_server) ClientDirectoryListServer(hs).register(http_server) ClientAppserviceDirectoryListServer(hs).register(http_server) class ClientDirectoryServer(ClientV1RestServlet): PATTERNS = client_path_patterns("/directory/room/(?P[^/]*)$") def __init__(self, hs): super(ClientDirectoryServer, self).__init__(hs) self.store = hs.get_datastore() self.handlers = hs.get_handlers() @defer.inlineCallbacks def on_GET(self, request, room_alias): room_alias = RoomAlias.from_string(room_alias) dir_handler = self.handlers.directory_handler res = yield dir_handler.get_association(room_alias) defer.returnValue((200, res)) @defer.inlineCallbacks def on_PUT(self, request, room_alias): content = parse_json_object_from_request(request) if "room_id" not in content: raise SynapseError(400, "Missing room_id key", errcode=Codes.BAD_JSON) logger.debug("Got content: %s", content) room_alias = RoomAlias.from_string(room_alias) logger.debug("Got room name: %s", room_alias.to_string()) room_id = content["room_id"] servers = content["servers"] if "servers" in content else None logger.debug("Got room_id: %s", room_id) logger.debug("Got servers: %s", servers) # TODO(erikj): Check types. 
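# The code below only creates the alias if the referenced room actually exists
# (a 400 is raised otherwise), and then creates the association either on behalf
# of an authenticated user or, if user auth fails, on behalf of an application
# service.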
room = yield self.store.get_room(room_id) if room is None: raise SynapseError(400, "Room does not exist") dir_handler = self.handlers.directory_handler try: # try to auth as a user requester = yield self.auth.get_user_by_req(request) try: user_id = requester.user.to_string() yield dir_handler.create_association( user_id, room_alias, room_id, servers ) yield dir_handler.send_room_alias_update_event( requester, user_id, room_id ) except SynapseError as e: raise e except: logger.exception("Failed to create association") raise except AuthError: # try to auth as an application service service = yield self.auth.get_appservice_by_req(request) yield dir_handler.create_appservice_association( service, room_alias, room_id, servers ) logger.info( "Application service at %s created alias %s pointing to %s", service.url, room_alias.to_string(), room_id ) defer.returnValue((200, {})) @defer.inlineCallbacks def on_DELETE(self, request, room_alias): dir_handler = self.handlers.directory_handler try: service = yield self.auth.get_appservice_by_req(request) room_alias = RoomAlias.from_string(room_alias) yield dir_handler.delete_appservice_association( service, room_alias ) logger.info( "Application service at %s deleted alias %s", service.url, room_alias.to_string() ) defer.returnValue((200, {})) except AuthError: # fallback to default user behaviour if they aren't an AS pass requester = yield self.auth.get_user_by_req(request) user = requester.user room_alias = RoomAlias.from_string(room_alias) yield dir_handler.delete_association( requester, user.to_string(), room_alias ) logger.info( "User %s deleted alias %s", user.to_string(), room_alias.to_string() ) defer.returnValue((200, {})) class ClientDirectoryListServer(ClientV1RestServlet): PATTERNS = client_path_patterns("/directory/list/room/(?P[^/]*)$") def __init__(self, hs): super(ClientDirectoryListServer, self).__init__(hs) self.store = hs.get_datastore() self.handlers = hs.get_handlers() @defer.inlineCallbacks def on_GET(self, request, room_id): room = yield self.store.get_room(room_id) if room is None: raise SynapseError(400, "Unknown room") defer.returnValue((200, { "visibility": "public" if room["is_public"] else "private" })) @defer.inlineCallbacks def on_PUT(self, request, room_id): requester = yield self.auth.get_user_by_req(request) content = parse_json_object_from_request(request) visibility = content.get("visibility", "public") yield self.handlers.directory_handler.edit_published_room_list( requester, room_id, visibility, ) defer.returnValue((200, {})) @defer.inlineCallbacks def on_DELETE(self, request, room_id): requester = yield self.auth.get_user_by_req(request) yield self.handlers.directory_handler.edit_published_room_list( requester, room_id, "private", ) defer.returnValue((200, {})) class ClientAppserviceDirectoryListServer(ClientV1RestServlet): PATTERNS = client_path_patterns( "/directory/list/appservice/(?P[^/]*)/(?P[^/]*)$" ) def __init__(self, hs): super(ClientAppserviceDirectoryListServer, self).__init__(hs) self.store = hs.get_datastore() self.handlers = hs.get_handlers() def on_PUT(self, request, network_id, room_id): content = parse_json_object_from_request(request) visibility = content.get("visibility", "public") return self._edit(request, network_id, room_id, visibility) def on_DELETE(self, request, network_id, room_id): return self._edit(request, network_id, room_id, "private") @defer.inlineCallbacks def _edit(self, request, network_id, room_id, visibility): requester = yield self.auth.get_user_by_req(request) if not 
requester.app_service: raise AuthError( 403, "Only appservices can edit the appservice published room list" ) yield self.handlers.directory_handler.edit_published_appservice_room_list( requester.app_service.id, network_id, room_id, visibility, ) defer.returnValue((200, {})) synapse-0.24.0/synapse/rest/client/v1/events.py000066400000000000000000000066241317335640100214130ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """This module contains REST servlets to do with event streaming, /events.""" from twisted.internet import defer from synapse.api.errors import SynapseError from synapse.streams.config import PaginationConfig from .base import ClientV1RestServlet, client_path_patterns from synapse.events.utils import serialize_event import logging logger = logging.getLogger(__name__) class EventStreamRestServlet(ClientV1RestServlet): PATTERNS = client_path_patterns("/events$") DEFAULT_LONGPOLL_TIME_MS = 30000 def __init__(self, hs): super(EventStreamRestServlet, self).__init__(hs) self.event_stream_handler = hs.get_event_stream_handler() @defer.inlineCallbacks def on_GET(self, request): requester = yield self.auth.get_user_by_req( request, allow_guest=True, ) is_guest = requester.is_guest room_id = None if is_guest: if "room_id" not in request.args: raise SynapseError(400, "Guest users must specify room_id param") if "room_id" in request.args: room_id = request.args["room_id"][0] pagin_config = PaginationConfig.from_request(request) timeout = EventStreamRestServlet.DEFAULT_LONGPOLL_TIME_MS if "timeout" in request.args: try: timeout = int(request.args["timeout"][0]) except ValueError: raise SynapseError(400, "timeout must be in milliseconds.") as_client_event = "raw" not in request.args chunk = yield self.event_stream_handler.get_stream( requester.user.to_string(), pagin_config, timeout=timeout, as_client_event=as_client_event, affect_presence=(not is_guest), room_id=room_id, is_guest=is_guest, ) defer.returnValue((200, chunk)) def on_OPTIONS(self, request): return (200, {}) # TODO: Unit test gets, with and without auth, with different kinds of events. 
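# EventRestServlet (below) handles GET /events/{eventId}: it fetches the event via
# the event handler for the requesting user and returns it serialized, or a 404
# "Event not found." response if nothing comes back.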
class EventRestServlet(ClientV1RestServlet): PATTERNS = client_path_patterns("/events/(?P[^/]*)$") def __init__(self, hs): super(EventRestServlet, self).__init__(hs) self.clock = hs.get_clock() self.event_handler = hs.get_event_handler() @defer.inlineCallbacks def on_GET(self, request, event_id): requester = yield self.auth.get_user_by_req(request) event = yield self.event_handler.get_event(requester.user, event_id) time_now = self.clock.time_msec() if event: defer.returnValue((200, serialize_event(event, time_now))) else: defer.returnValue((404, "Event not found.")) def register_servlets(hs, http_server): EventStreamRestServlet(hs).register(http_server) EventRestServlet(hs).register(http_server) synapse-0.24.0/synapse/rest/client/v1/initial_sync.py000066400000000000000000000033341317335640100225670ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from twisted.internet import defer from synapse.streams.config import PaginationConfig from .base import ClientV1RestServlet, client_path_patterns # TODO: Needs unit testing class InitialSyncRestServlet(ClientV1RestServlet): PATTERNS = client_path_patterns("/initialSync$") def __init__(self, hs): super(InitialSyncRestServlet, self).__init__(hs) self.initial_sync_handler = hs.get_initial_sync_handler() @defer.inlineCallbacks def on_GET(self, request): requester = yield self.auth.get_user_by_req(request) as_client_event = "raw" not in request.args pagination_config = PaginationConfig.from_request(request) include_archived = request.args.get("archived", None) == ["true"] content = yield self.initial_sync_handler.snapshot_all_rooms( user_id=requester.user.to_string(), pagin_config=pagination_config, as_client_event=as_client_event, include_archived=include_archived, ) defer.returnValue((200, content)) def register_servlets(hs, http_server): InitialSyncRestServlet(hs).register(http_server) synapse-0.24.0/synapse/rest/client/v1/login.py000066400000000000000000000501561317335640100212160ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
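# A password login handled by LoginRestServlet below expects a JSON body roughly of
# the following shape (hypothetical values); older clients may instead send the
# legacy top-level "user" / "medium" / "address" fields, which
# login_submission_legacy_convert rewrites into an "identifier" object:
#
#     { "type": "m.login.password", "identifier": { "type": "m.id.user", "user": "alice" }, "password": "s3cret" }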
from twisted.internet import defer from synapse.api.errors import SynapseError, LoginError, Codes from synapse.types import UserID from synapse.http.server import finish_request from synapse.http.servlet import parse_json_object_from_request from synapse.util.msisdn import phone_number_to_msisdn from .base import ClientV1RestServlet, client_path_patterns import simplejson as json import urllib import urlparse import logging from saml2 import BINDING_HTTP_POST from saml2 import config from saml2.client import Saml2Client import xml.etree.ElementTree as ET from twisted.web.client import PartialDownloadError logger = logging.getLogger(__name__) def login_submission_legacy_convert(submission): """ If the input login submission is an old style object (ie. with top-level user / medium / address) convert it to a typed object. """ if "user" in submission: submission["identifier"] = { "type": "m.id.user", "user": submission["user"], } del submission["user"] if "medium" in submission and "address" in submission: submission["identifier"] = { "type": "m.id.thirdparty", "medium": submission["medium"], "address": submission["address"], } del submission["medium"] del submission["address"] def login_id_thirdparty_from_phone(identifier): """ Convert a phone login identifier type to a generic threepid identifier Args: identifier(dict): Login identifier dict of type 'm.id.phone' Returns: Login identifier dict of type 'm.id.threepid' """ if "country" not in identifier or "number" not in identifier: raise SynapseError(400, "Invalid phone-type identifier") msisdn = phone_number_to_msisdn(identifier["country"], identifier["number"]) return { "type": "m.id.thirdparty", "medium": "msisdn", "address": msisdn, } class LoginRestServlet(ClientV1RestServlet): PATTERNS = client_path_patterns("/login$") PASS_TYPE = "m.login.password" SAML2_TYPE = "m.login.saml2" CAS_TYPE = "m.login.cas" TOKEN_TYPE = "m.login.token" JWT_TYPE = "m.login.jwt" def __init__(self, hs): super(LoginRestServlet, self).__init__(hs) self.idp_redirect_url = hs.config.saml2_idp_redirect_url self.password_enabled = hs.config.password_enabled self.saml2_enabled = hs.config.saml2_enabled self.jwt_enabled = hs.config.jwt_enabled self.jwt_secret = hs.config.jwt_secret self.jwt_algorithm = hs.config.jwt_algorithm self.cas_enabled = hs.config.cas_enabled self.auth_handler = self.hs.get_auth_handler() self.device_handler = self.hs.get_device_handler() self.handlers = hs.get_handlers() def on_GET(self, request): flows = [] if self.jwt_enabled: flows.append({"type": LoginRestServlet.JWT_TYPE}) if self.saml2_enabled: flows.append({"type": LoginRestServlet.SAML2_TYPE}) if self.cas_enabled: flows.append({"type": LoginRestServlet.CAS_TYPE}) # While its valid for us to advertise this login type generally, # synapse currently only gives out these tokens as part of the # CAS login flow. # Generally we don't want to advertise login flows that clients # don't know how to implement, since they (currently) will always # fall back to the fallback API if they don't understand one of the # login flow types returned. 
flows.append({"type": LoginRestServlet.TOKEN_TYPE}) if self.password_enabled: flows.append({"type": LoginRestServlet.PASS_TYPE}) return (200, {"flows": flows}) def on_OPTIONS(self, request): return (200, {}) @defer.inlineCallbacks def on_POST(self, request): login_submission = parse_json_object_from_request(request) try: if login_submission["type"] == LoginRestServlet.PASS_TYPE: if not self.password_enabled: raise SynapseError(400, "Password login has been disabled.") result = yield self.do_password_login(login_submission) defer.returnValue(result) elif self.saml2_enabled and (login_submission["type"] == LoginRestServlet.SAML2_TYPE): relay_state = "" if "relay_state" in login_submission: relay_state = "&RelayState=" + urllib.quote( login_submission["relay_state"]) result = { "uri": "%s%s" % (self.idp_redirect_url, relay_state) } defer.returnValue((200, result)) elif self.jwt_enabled and (login_submission["type"] == LoginRestServlet.JWT_TYPE): result = yield self.do_jwt_login(login_submission) defer.returnValue(result) elif login_submission["type"] == LoginRestServlet.TOKEN_TYPE: result = yield self.do_token_login(login_submission) defer.returnValue(result) else: raise SynapseError(400, "Bad login type.") except KeyError: raise SynapseError(400, "Missing JSON keys.") @defer.inlineCallbacks def do_password_login(self, login_submission): if "password" not in login_submission: raise SynapseError(400, "Missing parameter: password") login_submission_legacy_convert(login_submission) if "identifier" not in login_submission: raise SynapseError(400, "Missing param: identifier") identifier = login_submission["identifier"] if "type" not in identifier: raise SynapseError(400, "Login identifier has no type") # convert phone type identifiers to generic threepids if identifier["type"] == "m.id.phone": identifier = login_id_thirdparty_from_phone(identifier) # convert threepid identifiers to user IDs if identifier["type"] == "m.id.thirdparty": if 'medium' not in identifier or 'address' not in identifier: raise SynapseError(400, "Invalid thirdparty identifier") address = identifier['address'] if identifier['medium'] == 'email': # For emails, transform the address to lowercase. # We store all email addreses as lowercase in the DB. # (See add_threepid in synapse/handlers/auth.py) address = address.lower() user_id = yield self.hs.get_datastore().get_user_id_by_threepid( identifier['medium'], address ) if not user_id: raise LoginError(403, "", errcode=Codes.FORBIDDEN) identifier = { "type": "m.id.user", "user": user_id, } # by this point, the identifier should be an m.id.user: if it's anything # else, we haven't understood it. 
if identifier["type"] != "m.id.user": raise SynapseError(400, "Unknown login identifier type") if "user" not in identifier: raise SynapseError(400, "User identifier is missing 'user' key") user_id = identifier["user"] if not user_id.startswith('@'): user_id = UserID.create( user_id, self.hs.hostname ).to_string() auth_handler = self.auth_handler user_id = yield auth_handler.validate_password_login( user_id=user_id, password=login_submission["password"], ) device_id = yield self._register_device(user_id, login_submission) access_token = yield auth_handler.get_access_token_for_user_id( user_id, device_id, login_submission.get("initial_device_display_name"), ) result = { "user_id": user_id, # may have changed "access_token": access_token, "home_server": self.hs.hostname, "device_id": device_id, } defer.returnValue((200, result)) @defer.inlineCallbacks def do_token_login(self, login_submission): token = login_submission['token'] auth_handler = self.auth_handler user_id = ( yield auth_handler.validate_short_term_login_token_and_get_user_id(token) ) device_id = yield self._register_device(user_id, login_submission) access_token = yield auth_handler.get_access_token_for_user_id( user_id, device_id, login_submission.get("initial_device_display_name"), ) result = { "user_id": user_id, # may have changed "access_token": access_token, "home_server": self.hs.hostname, "device_id": device_id, } defer.returnValue((200, result)) @defer.inlineCallbacks def do_jwt_login(self, login_submission): token = login_submission.get("token", None) if token is None: raise LoginError( 401, "Token field for JWT is missing", errcode=Codes.UNAUTHORIZED ) import jwt from jwt.exceptions import InvalidTokenError try: payload = jwt.decode(token, self.jwt_secret, algorithms=[self.jwt_algorithm]) except jwt.ExpiredSignatureError: raise LoginError(401, "JWT expired", errcode=Codes.UNAUTHORIZED) except InvalidTokenError: raise LoginError(401, "Invalid JWT", errcode=Codes.UNAUTHORIZED) user = payload.get("sub", None) if user is None: raise LoginError(401, "Invalid JWT", errcode=Codes.UNAUTHORIZED) user_id = UserID.create(user, self.hs.hostname).to_string() auth_handler = self.auth_handler registered_user_id = yield auth_handler.check_user_exists(user_id) if registered_user_id: device_id = yield self._register_device( registered_user_id, login_submission ) access_token = yield auth_handler.get_access_token_for_user_id( registered_user_id, device_id, login_submission.get("initial_device_display_name"), ) result = { "user_id": registered_user_id, "access_token": access_token, "home_server": self.hs.hostname, } else: # TODO: we should probably check that the register isn't going # to fonx/change our user_id before registering the device device_id = yield self._register_device(user_id, login_submission) user_id, access_token = ( yield self.handlers.registration_handler.register(localpart=user) ) result = { "user_id": user_id, # may have changed "access_token": access_token, "home_server": self.hs.hostname, } defer.returnValue((200, result)) def _register_device(self, user_id, login_submission): """Register a device for a user. This is called after the user's credentials have been validated, but before the access token has been issued. 
Args: (str) user_id: full canonical @user:id (object) login_submission: dictionary supplied to /login call, from which we pull device_id and initial_device_name Returns: defer.Deferred: (str) device_id """ device_id = login_submission.get("device_id") initial_display_name = login_submission.get( "initial_device_display_name") return self.device_handler.check_device_registered( user_id, device_id, initial_display_name ) class SAML2RestServlet(ClientV1RestServlet): PATTERNS = client_path_patterns("/login/saml2", releases=()) def __init__(self, hs): super(SAML2RestServlet, self).__init__(hs) self.sp_config = hs.config.saml2_config_path self.handlers = hs.get_handlers() @defer.inlineCallbacks def on_POST(self, request): saml2_auth = None try: conf = config.SPConfig() conf.load_file(self.sp_config) SP = Saml2Client(conf) saml2_auth = SP.parse_authn_request_response( request.args['SAMLResponse'][0], BINDING_HTTP_POST) except Exception as e: # Not authenticated logger.exception(e) if saml2_auth and saml2_auth.status_ok() and not saml2_auth.not_signed: username = saml2_auth.name_id.text handler = self.handlers.registration_handler (user_id, token) = yield handler.register_saml2(username) # Forward to the RelayState callback along with ava if 'RelayState' in request.args: request.redirect(urllib.unquote( request.args['RelayState'][0]) + '?status=authenticated&access_token=' + token + '&user_id=' + user_id + '&ava=' + urllib.quote(json.dumps(saml2_auth.ava))) finish_request(request) defer.returnValue(None) defer.returnValue((200, {"status": "authenticated", "user_id": user_id, "token": token, "ava": saml2_auth.ava})) elif 'RelayState' in request.args: request.redirect(urllib.unquote( request.args['RelayState'][0]) + '?status=not_authenticated') finish_request(request) defer.returnValue(None) defer.returnValue((200, {"status": "not_authenticated"})) class CasRedirectServlet(ClientV1RestServlet): PATTERNS = client_path_patterns("/login/cas/redirect", releases=()) def __init__(self, hs): super(CasRedirectServlet, self).__init__(hs) self.cas_server_url = hs.config.cas_server_url self.cas_service_url = hs.config.cas_service_url def on_GET(self, request): args = request.args if "redirectUrl" not in args: return (400, "Redirect URL not specified for CAS auth") client_redirect_url_param = urllib.urlencode({ "redirectUrl": args["redirectUrl"][0] }) hs_redirect_url = self.cas_service_url + "/_matrix/client/api/v1/login/cas/ticket" service_param = urllib.urlencode({ "service": "%s?%s" % (hs_redirect_url, client_redirect_url_param) }) request.redirect("%s/login?%s" % (self.cas_server_url, service_param)) finish_request(request) class CasTicketServlet(ClientV1RestServlet): PATTERNS = client_path_patterns("/login/cas/ticket", releases=()) def __init__(self, hs): super(CasTicketServlet, self).__init__(hs) self.cas_server_url = hs.config.cas_server_url self.cas_service_url = hs.config.cas_service_url self.cas_required_attributes = hs.config.cas_required_attributes self.auth_handler = hs.get_auth_handler() self.handlers = hs.get_handlers() self.macaroon_gen = hs.get_macaroon_generator() @defer.inlineCallbacks def on_GET(self, request): client_redirect_url = request.args["redirectUrl"][0] http_client = self.hs.get_simple_http_client() uri = self.cas_server_url + "/proxyValidate" args = { "ticket": request.args["ticket"], "service": self.cas_service_url } try: body = yield http_client.get_raw(uri, args) except PartialDownloadError as pde: # Twisted raises this error if the connection is closed, # even if that's being 
used old-http style to signal end-of-data body = pde.response result = yield self.handle_cas_response(request, body, client_redirect_url) defer.returnValue(result) @defer.inlineCallbacks def handle_cas_response(self, request, cas_response_body, client_redirect_url): user, attributes = self.parse_cas_response(cas_response_body) for required_attribute, required_value in self.cas_required_attributes.items(): # If required attribute was not in CAS Response - Forbidden if required_attribute not in attributes: raise LoginError(401, "Unauthorized", errcode=Codes.UNAUTHORIZED) # Also need to check value if required_value is not None: actual_value = attributes[required_attribute] # If required attribute value does not match expected - Forbidden if required_value != actual_value: raise LoginError(401, "Unauthorized", errcode=Codes.UNAUTHORIZED) user_id = UserID.create(user, self.hs.hostname).to_string() auth_handler = self.auth_handler registered_user_id = yield auth_handler.check_user_exists(user_id) if not registered_user_id: registered_user_id, _ = ( yield self.handlers.registration_handler.register(localpart=user) ) login_token = self.macaroon_gen.generate_short_term_login_token( registered_user_id ) redirect_url = self.add_login_token_to_redirect_url(client_redirect_url, login_token) request.redirect(redirect_url) finish_request(request) def add_login_token_to_redirect_url(self, url, token): url_parts = list(urlparse.urlparse(url)) query = dict(urlparse.parse_qsl(url_parts[4])) query.update({"loginToken": token}) url_parts[4] = urllib.urlencode(query) return urlparse.urlunparse(url_parts) def parse_cas_response(self, cas_response_body): user = None attributes = {} try: root = ET.fromstring(cas_response_body) if not root.tag.endswith("serviceResponse"): raise Exception("root of CAS response is not serviceResponse") success = (root[0].tag.endswith("authenticationSuccess")) for child in root[0]: if child.tag.endswith("user"): user = child.text if child.tag.endswith("attributes"): for attribute in child: # ElementTree library expands the namespace in # attribute tags to the full URL of the namespace. # We don't care about namespace here and it will always # be encased in curly braces, so we remove them. tag = attribute.tag if "}" in tag: tag = tag.split("}")[1] attributes[tag] = attribute.text if user is None: raise Exception("CAS response does not contain user") except Exception: logger.error("Error parsing CAS response", exc_info=1) raise LoginError(401, "Invalid CAS response", errcode=Codes.UNAUTHORIZED) if not success: raise LoginError(401, "Unsuccessful CAS response", errcode=Codes.UNAUTHORIZED) return user, attributes def register_servlets(hs, http_server): LoginRestServlet(hs).register(http_server) if hs.config.saml2_enabled: SAML2RestServlet(hs).register(http_server) if hs.config.cas_enabled: CasRedirectServlet(hs).register(http_server) CasTicketServlet(hs).register(http_server) synapse-0.24.0/synapse/rest/client/v1/logout.py000066400000000000000000000037641317335640100214220ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. from twisted.internet import defer from synapse.api.auth import get_access_token_from_request from .base import ClientV1RestServlet, client_path_patterns import logging logger = logging.getLogger(__name__) class LogoutRestServlet(ClientV1RestServlet): PATTERNS = client_path_patterns("/logout$") def __init__(self, hs): super(LogoutRestServlet, self).__init__(hs) self.store = hs.get_datastore() def on_OPTIONS(self, request): return (200, {}) @defer.inlineCallbacks def on_POST(self, request): access_token = get_access_token_from_request(request) yield self.store.delete_access_token(access_token) defer.returnValue((200, {})) class LogoutAllRestServlet(ClientV1RestServlet): PATTERNS = client_path_patterns("/logout/all$") def __init__(self, hs): super(LogoutAllRestServlet, self).__init__(hs) self.store = hs.get_datastore() self.auth = hs.get_auth() def on_OPTIONS(self, request): return (200, {}) @defer.inlineCallbacks def on_POST(self, request): requester = yield self.auth.get_user_by_req(request) user_id = requester.user.to_string() yield self.store.user_delete_access_tokens(user_id) defer.returnValue((200, {})) def register_servlets(hs, http_server): LogoutRestServlet(hs).register(http_server) LogoutAllRestServlet(hs).register(http_server) synapse-0.24.0/synapse/rest/client/v1/presence.py000066400000000000000000000126001317335640100217020ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
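# The two servlets below cover GET/PUT /presence/{userId}/status, for reading and
# setting a user's presence state, and GET/POST /presence/list/{userId}, for reading
# and modifying a presence list (invite / drop), each guarded by the ownership and
# visibility checks in the handlers.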
""" This module contains REST servlets to do with presence: /presence/ """ from twisted.internet import defer from synapse.api.errors import SynapseError, AuthError from synapse.types import UserID from synapse.handlers.presence import format_user_presence_state from synapse.http.servlet import parse_json_object_from_request from .base import ClientV1RestServlet, client_path_patterns import logging logger = logging.getLogger(__name__) class PresenceStatusRestServlet(ClientV1RestServlet): PATTERNS = client_path_patterns("/presence/(?P[^/]*)/status") def __init__(self, hs): super(PresenceStatusRestServlet, self).__init__(hs) self.presence_handler = hs.get_presence_handler() self.clock = hs.get_clock() @defer.inlineCallbacks def on_GET(self, request, user_id): requester = yield self.auth.get_user_by_req(request) user = UserID.from_string(user_id) if requester.user != user: allowed = yield self.presence_handler.is_visible( observed_user=user, observer_user=requester.user, ) if not allowed: raise AuthError(403, "You are not allowed to see their presence.") state = yield self.presence_handler.get_state(target_user=user) state = format_user_presence_state(state, self.clock.time_msec()) defer.returnValue((200, state)) @defer.inlineCallbacks def on_PUT(self, request, user_id): requester = yield self.auth.get_user_by_req(request) user = UserID.from_string(user_id) if requester.user != user: raise AuthError(403, "Can only set your own presence state") state = {} content = parse_json_object_from_request(request) try: state["presence"] = content.pop("presence") if "status_msg" in content: state["status_msg"] = content.pop("status_msg") if not isinstance(state["status_msg"], basestring): raise SynapseError(400, "status_msg must be a string.") if content: raise KeyError() except SynapseError as e: raise e except: raise SynapseError(400, "Unable to parse state") yield self.presence_handler.set_state(user, state) defer.returnValue((200, {})) def on_OPTIONS(self, request): return (200, {}) class PresenceListRestServlet(ClientV1RestServlet): PATTERNS = client_path_patterns("/presence/list/(?P[^/]*)") def __init__(self, hs): super(PresenceListRestServlet, self).__init__(hs) self.presence_handler = hs.get_presence_handler() @defer.inlineCallbacks def on_GET(self, request, user_id): requester = yield self.auth.get_user_by_req(request) user = UserID.from_string(user_id) if not self.hs.is_mine(user): raise SynapseError(400, "User not hosted on this Home Server") if requester.user != user: raise SynapseError(400, "Cannot get another user's presence list") presence = yield self.presence_handler.get_presence_list( observer_user=user, accepted=True ) defer.returnValue((200, presence)) @defer.inlineCallbacks def on_POST(self, request, user_id): requester = yield self.auth.get_user_by_req(request) user = UserID.from_string(user_id) if not self.hs.is_mine(user): raise SynapseError(400, "User not hosted on this Home Server") if requester.user != user: raise SynapseError( 400, "Cannot modify another user's presence list") content = parse_json_object_from_request(request) if "invite" in content: for u in content["invite"]: if not isinstance(u, basestring): raise SynapseError(400, "Bad invite value.") if len(u) == 0: continue invited_user = UserID.from_string(u) yield self.presence_handler.send_presence_invite( observer_user=user, observed_user=invited_user ) if "drop" in content: for u in content["drop"]: if not isinstance(u, basestring): raise SynapseError(400, "Bad drop value.") if len(u) == 0: continue dropped_user = 
UserID.from_string(u) yield self.presence_handler.drop( observer_user=user, observed_user=dropped_user ) defer.returnValue((200, {})) def on_OPTIONS(self, request): return (200, {}) def register_servlets(hs, http_server): PresenceStatusRestServlet(hs).register(http_server) PresenceListRestServlet(hs).register(http_server) synapse-0.24.0/synapse/rest/client/v1/profile.py000066400000000000000000000105011317335640100215340ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ This module contains REST servlets to do with profile: /profile/ """ from twisted.internet import defer from .base import ClientV1RestServlet, client_path_patterns from synapse.types import UserID from synapse.http.servlet import parse_json_object_from_request class ProfileDisplaynameRestServlet(ClientV1RestServlet): PATTERNS = client_path_patterns("/profile/(?P[^/]*)/displayname") def __init__(self, hs): super(ProfileDisplaynameRestServlet, self).__init__(hs) self.profile_handler = hs.get_profile_handler() @defer.inlineCallbacks def on_GET(self, request, user_id): user = UserID.from_string(user_id) displayname = yield self.profile_handler.get_displayname( user, ) ret = {} if displayname is not None: ret["displayname"] = displayname defer.returnValue((200, ret)) @defer.inlineCallbacks def on_PUT(self, request, user_id): requester = yield self.auth.get_user_by_req(request, allow_guest=True) user = UserID.from_string(user_id) is_admin = yield self.auth.is_server_admin(requester.user) content = parse_json_object_from_request(request) try: new_name = content["displayname"] except: defer.returnValue((400, "Unable to parse name")) yield self.profile_handler.set_displayname( user, requester, new_name, is_admin) defer.returnValue((200, {})) def on_OPTIONS(self, request, user_id): return (200, {}) class ProfileAvatarURLRestServlet(ClientV1RestServlet): PATTERNS = client_path_patterns("/profile/(?P[^/]*)/avatar_url") def __init__(self, hs): super(ProfileAvatarURLRestServlet, self).__init__(hs) self.profile_handler = hs.get_profile_handler() @defer.inlineCallbacks def on_GET(self, request, user_id): user = UserID.from_string(user_id) avatar_url = yield self.profile_handler.get_avatar_url( user, ) ret = {} if avatar_url is not None: ret["avatar_url"] = avatar_url defer.returnValue((200, ret)) @defer.inlineCallbacks def on_PUT(self, request, user_id): requester = yield self.auth.get_user_by_req(request) user = UserID.from_string(user_id) is_admin = yield self.auth.is_server_admin(requester.user) content = parse_json_object_from_request(request) try: new_name = content["avatar_url"] except: defer.returnValue((400, "Unable to parse name")) yield self.profile_handler.set_avatar_url( user, requester, new_name, is_admin) defer.returnValue((200, {})) def on_OPTIONS(self, request, user_id): return (200, {}) class ProfileRestServlet(ClientV1RestServlet): PATTERNS = client_path_patterns("/profile/(?P[^/]*)") def __init__(self, hs): super(ProfileRestServlet, 
self).__init__(hs) self.profile_handler = hs.get_profile_handler() @defer.inlineCallbacks def on_GET(self, request, user_id): user = UserID.from_string(user_id) displayname = yield self.profile_handler.get_displayname( user, ) avatar_url = yield self.profile_handler.get_avatar_url( user, ) ret = {} if displayname is not None: ret["displayname"] = displayname if avatar_url is not None: ret["avatar_url"] = avatar_url defer.returnValue((200, ret)) def register_servlets(hs, http_server): ProfileDisplaynameRestServlet(hs).register(http_server) ProfileAvatarURLRestServlet(hs).register(http_server) ProfileRestServlet(hs).register(http_server) synapse-0.24.0/synapse/rest/client/v1/push_rule.py000066400000000000000000000252141317335640100221110ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from twisted.internet import defer from synapse.api.errors import ( SynapseError, UnrecognizedRequestError, NotFoundError, StoreError ) from .base import ClientV1RestServlet, client_path_patterns from synapse.storage.push_rule import ( InconsistentRuleException, RuleNotFoundException ) from synapse.push.clientformat import format_push_rules_for_user from synapse.push.baserules import BASE_RULE_IDS from synapse.push.rulekinds import PRIORITY_CLASS_MAP from synapse.http.servlet import parse_json_value_from_request class PushRuleRestServlet(ClientV1RestServlet): PATTERNS = client_path_patterns("/pushrules/.*$") SLIGHTLY_PEDANTIC_TRAILING_SLASH_ERROR = ( "Unrecognised request: You probably wanted a trailing slash") def __init__(self, hs): super(PushRuleRestServlet, self).__init__(hs) self.store = hs.get_datastore() self.notifier = hs.get_notifier() @defer.inlineCallbacks def on_PUT(self, request): spec = _rule_spec_from_path(request.postpath) try: priority_class = _priority_class_from_spec(spec) except InvalidRuleException as e: raise SynapseError(400, e.message) requester = yield self.auth.get_user_by_req(request) if '/' in spec['rule_id'] or '\\' in spec['rule_id']: raise SynapseError(400, "rule_id may not contain slashes") content = parse_json_value_from_request(request) user_id = requester.user.to_string() if 'attr' in spec: yield self.set_rule_attr(user_id, spec, content) self.notify_user(user_id) defer.returnValue((200, {})) if spec['rule_id'].startswith('.'): # Rule ids starting with '.' are reserved for server default rules. 
raise SynapseError(400, "cannot add new rule_ids that start with '.'") try: (conditions, actions) = _rule_tuple_from_request_object( spec['template'], spec['rule_id'], content, ) except InvalidRuleException as e: raise SynapseError(400, e.message) before = request.args.get("before", None) if before: before = _namespaced_rule_id(spec, before[0]) after = request.args.get("after", None) if after: after = _namespaced_rule_id(spec, after[0]) try: yield self.store.add_push_rule( user_id=user_id, rule_id=_namespaced_rule_id_from_spec(spec), priority_class=priority_class, conditions=conditions, actions=actions, before=before, after=after ) self.notify_user(user_id) except InconsistentRuleException as e: raise SynapseError(400, e.message) except RuleNotFoundException as e: raise SynapseError(400, e.message) defer.returnValue((200, {})) @defer.inlineCallbacks def on_DELETE(self, request): spec = _rule_spec_from_path(request.postpath) requester = yield self.auth.get_user_by_req(request) user_id = requester.user.to_string() namespaced_rule_id = _namespaced_rule_id_from_spec(spec) try: yield self.store.delete_push_rule( user_id, namespaced_rule_id ) self.notify_user(user_id) defer.returnValue((200, {})) except StoreError as e: if e.code == 404: raise NotFoundError() else: raise @defer.inlineCallbacks def on_GET(self, request): requester = yield self.auth.get_user_by_req(request) user_id = requester.user.to_string() # we build up the full structure and then decide which bits of it # to send which means doing unnecessary work sometimes but is # is probably not going to make a whole lot of difference rules = yield self.store.get_push_rules_for_user(user_id) rules = format_push_rules_for_user(requester.user, rules) path = request.postpath[1:] if path == []: # we're a reference impl: pedantry is our job. raise UnrecognizedRequestError( PushRuleRestServlet.SLIGHTLY_PEDANTIC_TRAILING_SLASH_ERROR ) if path[0] == '': defer.returnValue((200, rules)) elif path[0] == 'global': path = path[1:] result = _filter_ruleset_with_path(rules['global'], path) defer.returnValue((200, result)) else: raise UnrecognizedRequestError() def on_OPTIONS(self, _): return 200, {} def notify_user(self, user_id): stream_id, _ = self.store.get_push_rules_stream_token() self.notifier.on_new_event( "push_rules_key", stream_id, users=[user_id] ) def set_rule_attr(self, user_id, spec, val): if spec['attr'] == 'enabled': if isinstance(val, dict) and "enabled" in val: val = val["enabled"] if not isinstance(val, bool): # Legacy fallback # This should *actually* take a dict, but many clients pass # bools directly, so let's not break them. 
raise SynapseError(400, "Value for 'enabled' must be boolean") namespaced_rule_id = _namespaced_rule_id_from_spec(spec) return self.store.set_push_rule_enabled( user_id, namespaced_rule_id, val ) elif spec['attr'] == 'actions': actions = val.get('actions') _check_actions(actions) namespaced_rule_id = _namespaced_rule_id_from_spec(spec) rule_id = spec['rule_id'] is_default_rule = rule_id.startswith(".") if is_default_rule: if namespaced_rule_id not in BASE_RULE_IDS: raise SynapseError(404, "Unknown rule %r" % (namespaced_rule_id,)) return self.store.set_push_rule_actions( user_id, namespaced_rule_id, actions, is_default_rule ) else: raise UnrecognizedRequestError() def _rule_spec_from_path(path): if len(path) < 2: raise UnrecognizedRequestError() if path[0] != 'pushrules': raise UnrecognizedRequestError() scope = path[1] path = path[2:] if scope != 'global': raise UnrecognizedRequestError() if len(path) == 0: raise UnrecognizedRequestError() template = path[0] path = path[1:] if len(path) == 0 or len(path[0]) == 0: raise UnrecognizedRequestError() rule_id = path[0] spec = { 'scope': scope, 'template': template, 'rule_id': rule_id } path = path[1:] if len(path) > 0 and len(path[0]) > 0: spec['attr'] = path[0] return spec def _rule_tuple_from_request_object(rule_template, rule_id, req_obj): if rule_template in ['override', 'underride']: if 'conditions' not in req_obj: raise InvalidRuleException("Missing 'conditions'") conditions = req_obj['conditions'] for c in conditions: if 'kind' not in c: raise InvalidRuleException("Condition without 'kind'") elif rule_template == 'room': conditions = [{ 'kind': 'event_match', 'key': 'room_id', 'pattern': rule_id }] elif rule_template == 'sender': conditions = [{ 'kind': 'event_match', 'key': 'user_id', 'pattern': rule_id }] elif rule_template == 'content': if 'pattern' not in req_obj: raise InvalidRuleException("Content rule missing 'pattern'") pat = req_obj['pattern'] conditions = [{ 'kind': 'event_match', 'key': 'content.body', 'pattern': pat }] else: raise InvalidRuleException("Unknown rule template: %s" % (rule_template,)) if 'actions' not in req_obj: raise InvalidRuleException("No actions found") actions = req_obj['actions'] _check_actions(actions) return conditions, actions def _check_actions(actions): if not isinstance(actions, list): raise InvalidRuleException("No actions found") for a in actions: if a in ['notify', 'dont_notify', 'coalesce']: pass elif isinstance(a, dict) and 'set_tweak' in a: pass else: raise InvalidRuleException("Unrecognised action") def _filter_ruleset_with_path(ruleset, path): if path == []: raise UnrecognizedRequestError( PushRuleRestServlet.SLIGHTLY_PEDANTIC_TRAILING_SLASH_ERROR ) if path[0] == '': return ruleset template_kind = path[0] if template_kind not in ruleset: raise UnrecognizedRequestError() path = path[1:] if path == []: raise UnrecognizedRequestError( PushRuleRestServlet.SLIGHTLY_PEDANTIC_TRAILING_SLASH_ERROR ) if path[0] == '': return ruleset[template_kind] rule_id = path[0] the_rule = None for r in ruleset[template_kind]: if r['rule_id'] == rule_id: the_rule = r if the_rule is None: raise NotFoundError path = path[1:] if len(path) == 0: return the_rule attr = path[0] if attr in the_rule: # Make sure we return a JSON object as the attribute may be a # JSON value. 
return {attr: the_rule[attr]} else: raise UnrecognizedRequestError() def _priority_class_from_spec(spec): if spec['template'] not in PRIORITY_CLASS_MAP.keys(): raise InvalidRuleException("Unknown template: %s" % (spec['template'])) pc = PRIORITY_CLASS_MAP[spec['template']] return pc def _namespaced_rule_id_from_spec(spec): return _namespaced_rule_id(spec, spec['rule_id']) def _namespaced_rule_id(spec, rule_id): return "global/%s/%s" % (spec['template'], rule_id) class InvalidRuleException(Exception): pass def register_servlets(hs, http_server): PushRuleRestServlet(hs).register(http_server) synapse-0.24.0/synapse/rest/client/v1/pusher.py000066400000000000000000000143771317335640100214210ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from twisted.internet import defer from synapse.api.errors import SynapseError, Codes from synapse.push import PusherConfigException from synapse.http.servlet import ( parse_json_object_from_request, parse_string, RestServlet ) from synapse.http.server import finish_request from synapse.api.errors import StoreError from .base import ClientV1RestServlet, client_path_patterns import logging logger = logging.getLogger(__name__) class PushersRestServlet(ClientV1RestServlet): PATTERNS = client_path_patterns("/pushers$") def __init__(self, hs): super(PushersRestServlet, self).__init__(hs) @defer.inlineCallbacks def on_GET(self, request): requester = yield self.auth.get_user_by_req(request) user = requester.user pushers = yield self.hs.get_datastore().get_pushers_by_user_id( user.to_string() ) allowed_keys = [ "app_display_name", "app_id", "data", "device_display_name", "kind", "lang", "profile_tag", "pushkey", ] for p in pushers: for k, v in p.items(): if k not in allowed_keys: del p[k] defer.returnValue((200, {"pushers": pushers})) def on_OPTIONS(self, _): return 200, {} class PushersSetRestServlet(ClientV1RestServlet): PATTERNS = client_path_patterns("/pushers/set$") def __init__(self, hs): super(PushersSetRestServlet, self).__init__(hs) self.notifier = hs.get_notifier() self.pusher_pool = self.hs.get_pusherpool() @defer.inlineCallbacks def on_POST(self, request): requester = yield self.auth.get_user_by_req(request) user = requester.user content = parse_json_object_from_request(request) if ('pushkey' in content and 'app_id' in content and 'kind' in content and content['kind'] is None): yield self.pusher_pool.remove_pusher( content['app_id'], content['pushkey'], user_id=user.to_string() ) defer.returnValue((200, {})) reqd = ['kind', 'app_id', 'app_display_name', 'device_display_name', 'pushkey', 'lang', 'data'] missing = [] for i in reqd: if i not in content: missing.append(i) if len(missing): raise SynapseError(400, "Missing parameters: " + ','.join(missing), errcode=Codes.MISSING_PARAM) logger.debug("set pushkey %s to kind %s", content['pushkey'], content['kind']) logger.debug("Got pushers request with body: %r", content) append = False if 'append' in content: append = content['append'] if 
not append: yield self.pusher_pool.remove_pushers_by_app_id_and_pushkey_not_user( app_id=content['app_id'], pushkey=content['pushkey'], not_user_id=user.to_string() ) try: yield self.pusher_pool.add_pusher( user_id=user.to_string(), access_token=requester.access_token_id, kind=content['kind'], app_id=content['app_id'], app_display_name=content['app_display_name'], device_display_name=content['device_display_name'], pushkey=content['pushkey'], lang=content['lang'], data=content['data'], profile_tag=content.get('profile_tag', ""), ) except PusherConfigException as pce: raise SynapseError(400, "Config Error: " + pce.message, errcode=Codes.MISSING_PARAM) self.notifier.on_new_replication_data() defer.returnValue((200, {})) def on_OPTIONS(self, _): return 200, {} class PushersRemoveRestServlet(RestServlet): """ To allow pusher to be delete by clicking a link (ie. GET request) """ PATTERNS = client_path_patterns("/pushers/remove$") SUCCESS_HTML = "You have been unsubscribed" def __init__(self, hs): super(RestServlet, self).__init__() self.hs = hs self.notifier = hs.get_notifier() self.auth = hs.get_v1auth() self.pusher_pool = self.hs.get_pusherpool() @defer.inlineCallbacks def on_GET(self, request): requester = yield self.auth.get_user_by_req(request, rights="delete_pusher") user = requester.user app_id = parse_string(request, "app_id", required=True) pushkey = parse_string(request, "pushkey", required=True) try: yield self.pusher_pool.remove_pusher( app_id=app_id, pushkey=pushkey, user_id=user.to_string(), ) except StoreError as se: if se.code != 404: # This is fine: they're already unsubscribed raise self.notifier.on_new_replication_data() request.setResponseCode(200) request.setHeader(b"Content-Type", b"text/html; charset=utf-8") request.setHeader(b"Server", self.hs.version_string) request.setHeader(b"Content-Length", b"%d" % ( len(PushersRemoveRestServlet.SUCCESS_HTML), )) request.write(PushersRemoveRestServlet.SUCCESS_HTML) finish_request(request) defer.returnValue(None) def on_OPTIONS(self, _): return 200, {} def register_servlets(hs, http_server): PushersRestServlet(hs).register(http_server) PushersSetRestServlet(hs).register(http_server) PushersRemoveRestServlet(hs).register(http_server) synapse-0.24.0/synapse/rest/client/v1/register.py000066400000000000000000000365051317335640100217340ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
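# Usage sketch (assumptions: a caller that already knows the homeserver's
# registration_shared_secret; variable names here are illustrative only).
# The shared-secret stage handled further down in this module verifies an
# HMAC-SHA1 over the NUL-separated user, password and admin fields, so a
# client would build the "mac" field of its request body roughly as:
#
#     mac = hmac.new(key=registration_shared_secret, digestmod=sha1)
#     mac.update(user)
#     mac.update("\x00")
#     mac.update(password)
#     mac.update("\x00")
#     mac.update("admin" if admin else "notadmin")
#     register_json["mac"] = mac.hexdigest()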
"""This module contains REST servlets to do with registration: /register""" from twisted.internet import defer from synapse.api.errors import SynapseError, Codes from synapse.api.constants import LoginType from synapse.api.auth import get_access_token_from_request from .base import ClientV1RestServlet, client_path_patterns import synapse.util.stringutils as stringutils from synapse.http.servlet import parse_json_object_from_request from synapse.types import create_requester from synapse.util.async import run_on_reactor from hashlib import sha1 import hmac import logging logger = logging.getLogger(__name__) # We ought to be using hmac.compare_digest() but on older pythons it doesn't # exist. It's a _really minor_ security flaw to use plain string comparison # because the timing attack is so obscured by all the other code here it's # unlikely to make much difference if hasattr(hmac, "compare_digest"): compare_digest = hmac.compare_digest else: def compare_digest(a, b): return a == b class RegisterRestServlet(ClientV1RestServlet): """Handles registration with the home server. This servlet is in control of the registration flow; the registration handler doesn't have a concept of multi-stages or sessions. """ PATTERNS = client_path_patterns("/register$", releases=(), include_in_unstable=False) def __init__(self, hs): """ Args: hs (synapse.server.HomeServer): server """ super(RegisterRestServlet, self).__init__(hs) # sessions are stored as: # self.sessions = { # "session_id" : { __session_dict__ } # } # TODO: persistent storage self.sessions = {} self.enable_registration = hs.config.enable_registration self.auth_handler = hs.get_auth_handler() self.handlers = hs.get_handlers() def on_GET(self, request): if self.hs.config.enable_registration_captcha: return ( 200, {"flows": [ { "type": LoginType.RECAPTCHA, "stages": [ LoginType.RECAPTCHA, LoginType.EMAIL_IDENTITY, LoginType.PASSWORD ] }, { "type": LoginType.RECAPTCHA, "stages": [LoginType.RECAPTCHA, LoginType.PASSWORD] } ]} ) else: return ( 200, {"flows": [ { "type": LoginType.EMAIL_IDENTITY, "stages": [ LoginType.EMAIL_IDENTITY, LoginType.PASSWORD ] }, { "type": LoginType.PASSWORD } ]} ) @defer.inlineCallbacks def on_POST(self, request): register_json = parse_json_object_from_request(request) session = (register_json["session"] if "session" in register_json else None) login_type = None if "type" not in register_json: raise SynapseError(400, "Missing 'type' key.") try: login_type = register_json["type"] is_application_server = login_type == LoginType.APPLICATION_SERVICE is_using_shared_secret = login_type == LoginType.SHARED_SECRET can_register = ( self.enable_registration or is_application_server or is_using_shared_secret ) if not can_register: raise SynapseError(403, "Registration has been disabled") stages = { LoginType.RECAPTCHA: self._do_recaptcha, LoginType.PASSWORD: self._do_password, LoginType.EMAIL_IDENTITY: self._do_email_identity, LoginType.APPLICATION_SERVICE: self._do_app_service, LoginType.SHARED_SECRET: self._do_shared_secret, } session_info = self._get_session_info(request, session) logger.debug("%s : session info %s request info %s", login_type, session_info, register_json) response = yield stages[login_type]( request, register_json, session_info ) if "access_token" not in response: # isn't a final response response["session"] = session_info["id"] defer.returnValue((200, response)) except KeyError as e: logger.exception(e) raise SynapseError(400, "Missing JSON keys for login type %s." 
% ( login_type, )) def on_OPTIONS(self, request): return (200, {}) def _get_session_info(self, request, session_id): if not session_id: # create a new session while session_id is None or session_id in self.sessions: session_id = stringutils.random_string(24) self.sessions[session_id] = { "id": session_id, LoginType.EMAIL_IDENTITY: False, LoginType.RECAPTCHA: False } return self.sessions[session_id] def _save_session(self, session): # TODO: Persistent storage logger.debug("Saving session %s", session) self.sessions[session["id"]] = session def _remove_session(self, session): logger.debug("Removing session %s", session) self.sessions.pop(session["id"]) @defer.inlineCallbacks def _do_recaptcha(self, request, register_json, session): if not self.hs.config.enable_registration_captcha: raise SynapseError(400, "Captcha not required.") yield self._check_recaptcha(request, register_json, session) session[LoginType.RECAPTCHA] = True # mark captcha as done self._save_session(session) defer.returnValue({ "next": [LoginType.PASSWORD, LoginType.EMAIL_IDENTITY] }) @defer.inlineCallbacks def _check_recaptcha(self, request, register_json, session): if ("captcha_bypass_hmac" in register_json and self.hs.config.captcha_bypass_secret): if "user" not in register_json: raise SynapseError(400, "Captcha bypass needs 'user'") want = hmac.new( key=self.hs.config.captcha_bypass_secret, msg=register_json["user"], digestmod=sha1, ).hexdigest() # str() because otherwise hmac complains that 'unicode' does not # have the buffer interface got = str(register_json["captcha_bypass_hmac"]) if compare_digest(want, got): session["user"] = register_json["user"] defer.returnValue(None) else: raise SynapseError( 400, "Captcha bypass HMAC incorrect", errcode=Codes.CAPTCHA_NEEDED ) challenge = None user_response = None try: challenge = register_json["challenge"] user_response = register_json["response"] except KeyError: raise SynapseError(400, "Captcha response is required", errcode=Codes.CAPTCHA_NEEDED) ip_addr = self.hs.get_ip_from_request(request) handler = self.handlers.registration_handler yield handler.check_recaptcha( ip_addr, self.hs.config.recaptcha_private_key, challenge, user_response ) @defer.inlineCallbacks def _do_email_identity(self, request, register_json, session): if (self.hs.config.enable_registration_captcha and not session[LoginType.RECAPTCHA]): raise SynapseError(400, "Captcha is required.") threepidCreds = register_json['threepidCreds'] handler = self.handlers.registration_handler logger.debug("Registering email. threepidcreds: %s" % (threepidCreds)) yield handler.register_email(threepidCreds) session["threepidCreds"] = threepidCreds # store creds for next stage session[LoginType.EMAIL_IDENTITY] = True # mark email as done self._save_session(session) defer.returnValue({ "next": LoginType.PASSWORD }) @defer.inlineCallbacks def _do_password(self, request, register_json, session): yield run_on_reactor() if (self.hs.config.enable_registration_captcha and not session[LoginType.RECAPTCHA]): # captcha should've been done by this stage! 
raise SynapseError(400, "Captcha is required.") if ("user" in session and "user" in register_json and session["user"] != register_json["user"]): raise SynapseError( 400, "Cannot change user ID during registration" ) password = register_json["password"].encode("utf-8") desired_user_id = ( register_json["user"].encode("utf-8") if "user" in register_json else None ) handler = self.handlers.registration_handler (user_id, token) = yield handler.register( localpart=desired_user_id, password=password ) if session[LoginType.EMAIL_IDENTITY]: logger.debug("Binding emails %s to %s" % ( session["threepidCreds"], user_id) ) yield handler.bind_emails(user_id, session["threepidCreds"]) result = { "user_id": user_id, "access_token": token, "home_server": self.hs.hostname, } self._remove_session(session) defer.returnValue(result) @defer.inlineCallbacks def _do_app_service(self, request, register_json, session): as_token = get_access_token_from_request(request) if "user" not in register_json: raise SynapseError(400, "Expected 'user' key.") user_localpart = register_json["user"].encode("utf-8") handler = self.handlers.registration_handler user_id = yield handler.appservice_register( user_localpart, as_token ) token = yield self.auth_handler.issue_access_token(user_id) self._remove_session(session) defer.returnValue({ "user_id": user_id, "access_token": token, "home_server": self.hs.hostname, }) @defer.inlineCallbacks def _do_shared_secret(self, request, register_json, session): yield run_on_reactor() if not isinstance(register_json.get("mac", None), basestring): raise SynapseError(400, "Expected mac.") if not isinstance(register_json.get("user", None), basestring): raise SynapseError(400, "Expected 'user' key.") if not isinstance(register_json.get("password", None), basestring): raise SynapseError(400, "Expected 'password' key.") if not self.hs.config.registration_shared_secret: raise SynapseError(400, "Shared secret registration is not enabled") user = register_json["user"].encode("utf-8") password = register_json["password"].encode("utf-8") admin = register_json.get("admin", None) # Its important to check as we use null bytes as HMAC field separators if "\x00" in user: raise SynapseError(400, "Invalid user") if "\x00" in password: raise SynapseError(400, "Invalid password") # str() because otherwise hmac complains that 'unicode' does not # have the buffer interface got_mac = str(register_json["mac"]) want_mac = hmac.new( key=self.hs.config.registration_shared_secret, digestmod=sha1, ) want_mac.update(user) want_mac.update("\x00") want_mac.update(password) want_mac.update("\x00") want_mac.update("admin" if admin else "notadmin") want_mac = want_mac.hexdigest() if compare_digest(want_mac, got_mac): handler = self.handlers.registration_handler user_id, token = yield handler.register( localpart=user, password=password, admin=bool(admin), ) self._remove_session(session) defer.returnValue({ "user_id": user_id, "access_token": token, "home_server": self.hs.hostname, }) else: raise SynapseError( 403, "HMAC incorrect", ) class CreateUserRestServlet(ClientV1RestServlet): """Handles user creation via a server-to-server interface """ PATTERNS = client_path_patterns("/createUser$", releases=()) def __init__(self, hs): super(CreateUserRestServlet, self).__init__(hs) self.store = hs.get_datastore() self.handlers = hs.get_handlers() @defer.inlineCallbacks def on_POST(self, request): user_json = parse_json_object_from_request(request) access_token = get_access_token_from_request(request) app_service = 
self.store.get_app_service_by_token( access_token ) if not app_service: raise SynapseError(403, "Invalid application service token.") requester = create_requester(app_service.sender) logger.debug("creating user: %s", user_json) response = yield self._do_create(requester, user_json) defer.returnValue((200, response)) def on_OPTIONS(self, request): return 403, {} @defer.inlineCallbacks def _do_create(self, requester, user_json): yield run_on_reactor() if "localpart" not in user_json: raise SynapseError(400, "Expected 'localpart' key.") if "displayname" not in user_json: raise SynapseError(400, "Expected 'displayname' key.") localpart = user_json["localpart"].encode("utf-8") displayname = user_json["displayname"].encode("utf-8") password_hash = user_json["password_hash"].encode("utf-8") \ if user_json.get("password_hash") else None handler = self.handlers.registration_handler user_id, token = yield handler.get_or_create_user( requester=requester, localpart=localpart, displayname=displayname, password_hash=password_hash ) defer.returnValue({ "user_id": user_id, "access_token": token, "home_server": self.hs.hostname, }) def register_servlets(hs, http_server): RegisterRestServlet(hs).register(http_server) CreateUserRestServlet(hs).register(http_server) synapse-0.24.0/synapse/rest/client/v1/room.py000066400000000000000000000670551317335640100210700ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
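# Route sketch (client path prefixes elided; the concrete prefixes come from
# client_path_patterns). The servlets registered at the bottom of this module
# map, roughly, to:
#
#     PUT  .../rooms/{room_id}/state/{event_type}/{state_key}  -> RoomStateEventRestServlet
#     PUT  .../rooms/{room_id}/send/{event_type}/{txn_id}      -> RoomSendEventRestServlet
#     POST .../join/{room_id_or_alias}                         -> JoinRoomAliasServlet
#     POST .../rooms/{room_id}/{invite|join|leave|ban|...}     -> RoomMembershipRestServlet
#     GET  .../rooms/{room_id}/messages                        -> RoomMessageListRestServlet
#
# {room_id}, {event_type}, {state_key} and {txn_id} correspond to the named
# regex groups used by the pattern strings below.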
""" This module contains REST servlets to do with rooms: /rooms/ """ from twisted.internet import defer from .base import ClientV1RestServlet, client_path_patterns from synapse.api.errors import SynapseError, Codes, AuthError from synapse.streams.config import PaginationConfig from synapse.api.constants import EventTypes, Membership from synapse.api.filtering import Filter from synapse.types import UserID, RoomID, RoomAlias, ThirdPartyInstanceID from synapse.events.utils import serialize_event, format_event_for_client_v2 from synapse.http.servlet import ( parse_json_object_from_request, parse_string, parse_integer ) import logging import urllib import ujson as json logger = logging.getLogger(__name__) class RoomCreateRestServlet(ClientV1RestServlet): # No PATTERN; we have custom dispatch rules here def __init__(self, hs): super(RoomCreateRestServlet, self).__init__(hs) self.handlers = hs.get_handlers() def register(self, http_server): PATTERNS = "/createRoom" register_txn_path(self, PATTERNS, http_server) # define CORS for all of /rooms in RoomCreateRestServlet for simplicity http_server.register_paths("OPTIONS", client_path_patterns("/rooms(?:/.*)?$"), self.on_OPTIONS) # define CORS for /createRoom[/txnid] http_server.register_paths("OPTIONS", client_path_patterns("/createRoom(?:/.*)?$"), self.on_OPTIONS) def on_PUT(self, request, txn_id): return self.txns.fetch_or_execute_request( request, self.on_POST, request ) @defer.inlineCallbacks def on_POST(self, request): requester = yield self.auth.get_user_by_req(request) handler = self.handlers.room_creation_handler info = yield handler.create_room( requester, self.get_room_config(request) ) defer.returnValue((200, info)) def get_room_config(self, request): user_supplied_config = parse_json_object_from_request(request) return user_supplied_config def on_OPTIONS(self, request): return (200, {}) # TODO: Needs unit testing for generic events class RoomStateEventRestServlet(ClientV1RestServlet): def __init__(self, hs): super(RoomStateEventRestServlet, self).__init__(hs) self.handlers = hs.get_handlers() def register(self, http_server): # /room/$roomid/state/$eventtype no_state_key = "/rooms/(?P[^/]*)/state/(?P[^/]*)$" # /room/$roomid/state/$eventtype/$statekey state_key = ("/rooms/(?P[^/]*)/state/" "(?P[^/]*)/(?P[^/]*)$") http_server.register_paths("GET", client_path_patterns(state_key), self.on_GET) http_server.register_paths("PUT", client_path_patterns(state_key), self.on_PUT) http_server.register_paths("GET", client_path_patterns(no_state_key), self.on_GET_no_state_key) http_server.register_paths("PUT", client_path_patterns(no_state_key), self.on_PUT_no_state_key) def on_GET_no_state_key(self, request, room_id, event_type): return self.on_GET(request, room_id, event_type, "") def on_PUT_no_state_key(self, request, room_id, event_type): return self.on_PUT(request, room_id, event_type, "") @defer.inlineCallbacks def on_GET(self, request, room_id, event_type, state_key): requester = yield self.auth.get_user_by_req(request, allow_guest=True) format = parse_string(request, "format", default="content", allowed_values=["content", "event"]) msg_handler = self.handlers.message_handler data = yield msg_handler.get_room_data( user_id=requester.user.to_string(), room_id=room_id, event_type=event_type, state_key=state_key, is_guest=requester.is_guest, ) if not data: raise SynapseError( 404, "Event not found.", errcode=Codes.NOT_FOUND ) if format == "event": event = format_event_for_client_v2(data.get_dict()) defer.returnValue((200, event)) elif format == 
"content": defer.returnValue((200, data.get_dict()["content"])) @defer.inlineCallbacks def on_PUT(self, request, room_id, event_type, state_key, txn_id=None): requester = yield self.auth.get_user_by_req(request) content = parse_json_object_from_request(request) event_dict = { "type": event_type, "content": content, "room_id": room_id, "sender": requester.user.to_string(), } if state_key is not None: event_dict["state_key"] = state_key if event_type == EventTypes.Member: membership = content.get("membership", None) event = yield self.handlers.room_member_handler.update_membership( requester, target=UserID.from_string(state_key), room_id=room_id, action=membership, content=content, ) else: msg_handler = self.handlers.message_handler event, context = yield msg_handler.create_event( requester, event_dict, token_id=requester.access_token_id, txn_id=txn_id, ) yield msg_handler.send_nonmember_event(requester, event, context) ret = {} if event: ret = {"event_id": event.event_id} defer.returnValue((200, ret)) # TODO: Needs unit testing for generic events + feedback class RoomSendEventRestServlet(ClientV1RestServlet): def __init__(self, hs): super(RoomSendEventRestServlet, self).__init__(hs) self.handlers = hs.get_handlers() def register(self, http_server): # /rooms/$roomid/send/$event_type[/$txn_id] PATTERNS = ("/rooms/(?P[^/]*)/send/(?P[^/]*)") register_txn_path(self, PATTERNS, http_server, with_get=True) @defer.inlineCallbacks def on_POST(self, request, room_id, event_type, txn_id=None): requester = yield self.auth.get_user_by_req(request, allow_guest=True) content = parse_json_object_from_request(request) msg_handler = self.handlers.message_handler event = yield msg_handler.create_and_send_nonmember_event( requester, { "type": event_type, "content": content, "room_id": room_id, "sender": requester.user.to_string(), }, txn_id=txn_id, ) defer.returnValue((200, {"event_id": event.event_id})) def on_GET(self, request, room_id, event_type, txn_id): return (200, "Not implemented") def on_PUT(self, request, room_id, event_type, txn_id): return self.txns.fetch_or_execute_request( request, self.on_POST, request, room_id, event_type, txn_id ) # TODO: Needs unit testing for room ID + alias joins class JoinRoomAliasServlet(ClientV1RestServlet): def __init__(self, hs): super(JoinRoomAliasServlet, self).__init__(hs) self.handlers = hs.get_handlers() def register(self, http_server): # /join/$room_identifier[/$txn_id] PATTERNS = ("/join/(?P[^/]*)") register_txn_path(self, PATTERNS, http_server) @defer.inlineCallbacks def on_POST(self, request, room_identifier, txn_id=None): requester = yield self.auth.get_user_by_req( request, allow_guest=True, ) try: content = parse_json_object_from_request(request) except: # Turns out we used to ignore the body entirely, and some clients # cheekily send invalid bodies. 
content = {} if RoomID.is_valid(room_identifier): room_id = room_identifier try: remote_room_hosts = request.args["server_name"] except: remote_room_hosts = None elif RoomAlias.is_valid(room_identifier): handler = self.handlers.room_member_handler room_alias = RoomAlias.from_string(room_identifier) room_id, remote_room_hosts = yield handler.lookup_room_alias(room_alias) room_id = room_id.to_string() else: raise SynapseError(400, "%s was not legal room ID or room alias" % ( room_identifier, )) yield self.handlers.room_member_handler.update_membership( requester=requester, target=requester.user, room_id=room_id, action="join", txn_id=txn_id, remote_room_hosts=remote_room_hosts, content=content, third_party_signed=content.get("third_party_signed", None), ) defer.returnValue((200, {"room_id": room_id})) def on_PUT(self, request, room_identifier, txn_id): return self.txns.fetch_or_execute_request( request, self.on_POST, request, room_identifier, txn_id ) # TODO: Needs unit testing class PublicRoomListRestServlet(ClientV1RestServlet): PATTERNS = client_path_patterns("/publicRooms$") @defer.inlineCallbacks def on_GET(self, request): server = parse_string(request, "server", default=None) try: yield self.auth.get_user_by_req(request, allow_guest=True) except AuthError as e: # We allow people to not be authed if they're just looking at our # room list, but require auth when we proxy the request. # In both cases we call the auth function, as that has the side # effect of logging who issued this request if an access token was # provided. if server: raise e else: pass limit = parse_integer(request, "limit", 0) since_token = parse_string(request, "since", None) handler = self.hs.get_room_list_handler() if server: data = yield handler.get_remote_public_room_list( server, limit=limit, since_token=since_token, ) else: data = yield handler.get_local_public_room_list( limit=limit, since_token=since_token, ) defer.returnValue((200, data)) @defer.inlineCallbacks def on_POST(self, request): yield self.auth.get_user_by_req(request, allow_guest=True) server = parse_string(request, "server", default=None) content = parse_json_object_from_request(request) limit = int(content.get("limit", 100)) since_token = content.get("since", None) search_filter = content.get("filter", None) include_all_networks = content.get("include_all_networks", False) third_party_instance_id = content.get("third_party_instance_id", None) if include_all_networks: network_tuple = None if third_party_instance_id is not None: raise SynapseError( 400, "Can't use include_all_networks with an explicit network" ) elif third_party_instance_id is None: network_tuple = ThirdPartyInstanceID(None, None) else: network_tuple = ThirdPartyInstanceID.from_string(third_party_instance_id) handler = self.hs.get_room_list_handler() if server: data = yield handler.get_remote_public_room_list( server, limit=limit, since_token=since_token, search_filter=search_filter, include_all_networks=include_all_networks, third_party_instance_id=third_party_instance_id, ) else: data = yield handler.get_local_public_room_list( limit=limit, since_token=since_token, search_filter=search_filter, network_tuple=network_tuple, ) defer.returnValue((200, data)) # TODO: Needs unit testing class RoomMemberListRestServlet(ClientV1RestServlet): PATTERNS = client_path_patterns("/rooms/(?P[^/]*)/members$") def __init__(self, hs): super(RoomMemberListRestServlet, self).__init__(hs) self.handlers = hs.get_handlers() @defer.inlineCallbacks def on_GET(self, request, room_id): # TODO support 
Pagination stream API (limit/tokens) requester = yield self.auth.get_user_by_req(request) handler = self.handlers.message_handler events = yield handler.get_state_events( room_id=room_id, user_id=requester.user.to_string(), ) chunk = [] for event in events: if event["type"] != EventTypes.Member: continue chunk.append(event) defer.returnValue((200, { "chunk": chunk })) class JoinedRoomMemberListRestServlet(ClientV1RestServlet): PATTERNS = client_path_patterns("/rooms/(?P[^/]*)/joined_members$") def __init__(self, hs): super(JoinedRoomMemberListRestServlet, self).__init__(hs) self.message_handler = hs.get_handlers().message_handler @defer.inlineCallbacks def on_GET(self, request, room_id): requester = yield self.auth.get_user_by_req(request) users_with_profile = yield self.message_handler.get_joined_members( requester, room_id, ) defer.returnValue((200, { "joined": users_with_profile, })) # TODO: Needs better unit testing class RoomMessageListRestServlet(ClientV1RestServlet): PATTERNS = client_path_patterns("/rooms/(?P[^/]*)/messages$") def __init__(self, hs): super(RoomMessageListRestServlet, self).__init__(hs) self.handlers = hs.get_handlers() @defer.inlineCallbacks def on_GET(self, request, room_id): requester = yield self.auth.get_user_by_req(request, allow_guest=True) pagination_config = PaginationConfig.from_request( request, default_limit=10, ) as_client_event = "raw" not in request.args filter_bytes = request.args.get("filter", None) if filter_bytes: filter_json = urllib.unquote(filter_bytes[-1]).decode("UTF-8") event_filter = Filter(json.loads(filter_json)) else: event_filter = None handler = self.handlers.message_handler msgs = yield handler.get_messages( room_id=room_id, requester=requester, pagin_config=pagination_config, as_client_event=as_client_event, event_filter=event_filter, ) defer.returnValue((200, msgs)) # TODO: Needs unit testing class RoomStateRestServlet(ClientV1RestServlet): PATTERNS = client_path_patterns("/rooms/(?P[^/]*)/state$") def __init__(self, hs): super(RoomStateRestServlet, self).__init__(hs) self.handlers = hs.get_handlers() @defer.inlineCallbacks def on_GET(self, request, room_id): requester = yield self.auth.get_user_by_req(request, allow_guest=True) handler = self.handlers.message_handler # Get all the current state for this room events = yield handler.get_state_events( room_id=room_id, user_id=requester.user.to_string(), is_guest=requester.is_guest, ) defer.returnValue((200, events)) # TODO: Needs unit testing class RoomInitialSyncRestServlet(ClientV1RestServlet): PATTERNS = client_path_patterns("/rooms/(?P[^/]*)/initialSync$") def __init__(self, hs): super(RoomInitialSyncRestServlet, self).__init__(hs) self.initial_sync_handler = hs.get_initial_sync_handler() @defer.inlineCallbacks def on_GET(self, request, room_id): requester = yield self.auth.get_user_by_req(request, allow_guest=True) pagination_config = PaginationConfig.from_request(request) content = yield self.initial_sync_handler.room_initial_sync( room_id=room_id, requester=requester, pagin_config=pagination_config, ) defer.returnValue((200, content)) class RoomEventContext(ClientV1RestServlet): PATTERNS = client_path_patterns( "/rooms/(?P[^/]*)/context/(?P[^/]*)$" ) def __init__(self, hs): super(RoomEventContext, self).__init__(hs) self.clock = hs.get_clock() self.handlers = hs.get_handlers() @defer.inlineCallbacks def on_GET(self, request, room_id, event_id): requester = yield self.auth.get_user_by_req(request, allow_guest=True) limit = int(request.args.get("limit", [10])[0]) results = yield 
self.handlers.room_context_handler.get_event_context( requester.user, room_id, event_id, limit, ) if not results: raise SynapseError( 404, "Event not found.", errcode=Codes.NOT_FOUND ) time_now = self.clock.time_msec() results["events_before"] = [ serialize_event(event, time_now) for event in results["events_before"] ] results["event"] = serialize_event(results["event"], time_now) results["events_after"] = [ serialize_event(event, time_now) for event in results["events_after"] ] results["state"] = [ serialize_event(event, time_now) for event in results["state"] ] defer.returnValue((200, results)) class RoomForgetRestServlet(ClientV1RestServlet): def __init__(self, hs): super(RoomForgetRestServlet, self).__init__(hs) self.handlers = hs.get_handlers() def register(self, http_server): PATTERNS = ("/rooms/(?P[^/]*)/forget") register_txn_path(self, PATTERNS, http_server) @defer.inlineCallbacks def on_POST(self, request, room_id, txn_id=None): requester = yield self.auth.get_user_by_req( request, allow_guest=False, ) yield self.handlers.room_member_handler.forget( user=requester.user, room_id=room_id, ) defer.returnValue((200, {})) def on_PUT(self, request, room_id, txn_id): return self.txns.fetch_or_execute_request( request, self.on_POST, request, room_id, txn_id ) # TODO: Needs unit testing class RoomMembershipRestServlet(ClientV1RestServlet): def __init__(self, hs): super(RoomMembershipRestServlet, self).__init__(hs) self.handlers = hs.get_handlers() def register(self, http_server): # /rooms/$roomid/[invite|join|leave] PATTERNS = ("/rooms/(?P[^/]*)/" "(?Pjoin|invite|leave|ban|unban|kick|forget)") register_txn_path(self, PATTERNS, http_server) @defer.inlineCallbacks def on_POST(self, request, room_id, membership_action, txn_id=None): requester = yield self.auth.get_user_by_req( request, allow_guest=True, ) if requester.is_guest and membership_action not in { Membership.JOIN, Membership.LEAVE }: raise AuthError(403, "Guest access not allowed") try: content = parse_json_object_from_request(request) except: # Turns out we used to ignore the body entirely, and some clients # cheekily send invalid bodies. 
content = {} if membership_action == "invite" and self._has_3pid_invite_keys(content): yield self.handlers.room_member_handler.do_3pid_invite( room_id, requester.user, content["medium"], content["address"], content["id_server"], requester, txn_id ) defer.returnValue((200, {})) return target = requester.user if membership_action in ["invite", "ban", "unban", "kick"]: if "user_id" not in content: raise SynapseError(400, "Missing user_id key.") target = UserID.from_string(content["user_id"]) event_content = None if 'reason' in content and membership_action in ['kick', 'ban']: event_content = {'reason': content['reason']} yield self.handlers.room_member_handler.update_membership( requester=requester, target=target, room_id=room_id, action=membership_action, txn_id=txn_id, third_party_signed=content.get("third_party_signed", None), content=event_content, ) defer.returnValue((200, {})) def _has_3pid_invite_keys(self, content): for key in {"id_server", "medium", "address"}: if key not in content: return False return True def on_PUT(self, request, room_id, membership_action, txn_id): return self.txns.fetch_or_execute_request( request, self.on_POST, request, room_id, membership_action, txn_id ) class RoomRedactEventRestServlet(ClientV1RestServlet): def __init__(self, hs): super(RoomRedactEventRestServlet, self).__init__(hs) self.handlers = hs.get_handlers() def register(self, http_server): PATTERNS = ("/rooms/(?P[^/]*)/redact/(?P[^/]*)") register_txn_path(self, PATTERNS, http_server) @defer.inlineCallbacks def on_POST(self, request, room_id, event_id, txn_id=None): requester = yield self.auth.get_user_by_req(request) content = parse_json_object_from_request(request) msg_handler = self.handlers.message_handler event = yield msg_handler.create_and_send_nonmember_event( requester, { "type": EventTypes.Redaction, "content": content, "room_id": room_id, "sender": requester.user.to_string(), "redacts": event_id, }, txn_id=txn_id, ) defer.returnValue((200, {"event_id": event.event_id})) def on_PUT(self, request, room_id, event_id, txn_id): return self.txns.fetch_or_execute_request( request, self.on_POST, request, room_id, event_id, txn_id ) class RoomTypingRestServlet(ClientV1RestServlet): PATTERNS = client_path_patterns( "/rooms/(?P[^/]*)/typing/(?P[^/]*)$" ) def __init__(self, hs): super(RoomTypingRestServlet, self).__init__(hs) self.presence_handler = hs.get_presence_handler() self.typing_handler = hs.get_typing_handler() @defer.inlineCallbacks def on_PUT(self, request, room_id, user_id): requester = yield self.auth.get_user_by_req(request) room_id = urllib.unquote(room_id) target_user = UserID.from_string(urllib.unquote(user_id)) content = parse_json_object_from_request(request) yield self.presence_handler.bump_presence_active_time(requester.user) # Limit timeout to stop people from setting silly typing timeouts. 
timeout = min(content.get("timeout", 30000), 120000) if content["typing"]: yield self.typing_handler.started_typing( target_user=target_user, auth_user=requester.user, room_id=room_id, timeout=timeout, ) else: yield self.typing_handler.stopped_typing( target_user=target_user, auth_user=requester.user, room_id=room_id, ) defer.returnValue((200, {})) class SearchRestServlet(ClientV1RestServlet): PATTERNS = client_path_patterns( "/search$" ) def __init__(self, hs): super(SearchRestServlet, self).__init__(hs) self.handlers = hs.get_handlers() @defer.inlineCallbacks def on_POST(self, request): requester = yield self.auth.get_user_by_req(request) content = parse_json_object_from_request(request) batch = request.args.get("next_batch", [None])[0] results = yield self.handlers.search_handler.search( requester.user, content, batch, ) defer.returnValue((200, results)) class JoinedRoomsRestServlet(ClientV1RestServlet): PATTERNS = client_path_patterns("/joined_rooms$") def __init__(self, hs): super(JoinedRoomsRestServlet, self).__init__(hs) self.store = hs.get_datastore() @defer.inlineCallbacks def on_GET(self, request): requester = yield self.auth.get_user_by_req(request, allow_guest=True) room_ids = yield self.store.get_rooms_for_user(requester.user.to_string()) defer.returnValue((200, {"joined_rooms": list(room_ids)})) def register_txn_path(servlet, regex_string, http_server, with_get=False): """Registers a transaction-based path. This registers two paths: PUT regex_string/$txnid POST regex_string Args: regex_string (str): The regex string to register. Must NOT have a trailing $ as this string will be appended to. http_server : The http_server to register paths with. with_get: True to also register respective GET paths for the PUTs. """ http_server.register_paths( "POST", client_path_patterns(regex_string + "$"), servlet.on_POST ) http_server.register_paths( "PUT", client_path_patterns(regex_string + "/(?P[^/]*)$"), servlet.on_PUT ) if with_get: http_server.register_paths( "GET", client_path_patterns(regex_string + "/(?P[^/]*)$"), servlet.on_GET ) def register_servlets(hs, http_server): RoomStateEventRestServlet(hs).register(http_server) RoomCreateRestServlet(hs).register(http_server) RoomMemberListRestServlet(hs).register(http_server) JoinedRoomMemberListRestServlet(hs).register(http_server) RoomMessageListRestServlet(hs).register(http_server) JoinRoomAliasServlet(hs).register(http_server) RoomForgetRestServlet(hs).register(http_server) RoomMembershipRestServlet(hs).register(http_server) RoomSendEventRestServlet(hs).register(http_server) PublicRoomListRestServlet(hs).register(http_server) RoomStateRestServlet(hs).register(http_server) RoomInitialSyncRestServlet(hs).register(http_server) RoomRedactEventRestServlet(hs).register(http_server) RoomTypingRestServlet(hs).register(http_server) SearchRestServlet(hs).register(http_server) JoinedRoomsRestServlet(hs).register(http_server) RoomEventContext(hs).register(http_server) synapse-0.24.0/synapse/rest/client/v1/voip.py000066400000000000000000000044741317335640100210650ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from twisted.internet import defer from .base import ClientV1RestServlet, client_path_patterns import hmac import hashlib import base64 class VoipRestServlet(ClientV1RestServlet): PATTERNS = client_path_patterns("/voip/turnServer$") @defer.inlineCallbacks def on_GET(self, request): requester = yield self.auth.get_user_by_req( request, self.hs.config.turn_allow_guests ) turnUris = self.hs.config.turn_uris turnSecret = self.hs.config.turn_shared_secret turnUsername = self.hs.config.turn_username turnPassword = self.hs.config.turn_password userLifetime = self.hs.config.turn_user_lifetime if turnUris and turnSecret and userLifetime: expiry = (self.hs.get_clock().time_msec() + userLifetime) / 1000 username = "%d:%s" % (expiry, requester.user.to_string()) mac = hmac.new(turnSecret, msg=username, digestmod=hashlib.sha1) # We need to use standard padded base64 encoding here # encode_base64 because we need to add the standard padding to get the # same result as the TURN server. password = base64.b64encode(mac.digest()) elif turnUris and turnUsername and turnPassword and userLifetime: username = turnUsername password = turnPassword else: defer.returnValue((200, {})) defer.returnValue((200, { 'username': username, 'password': password, 'ttl': userLifetime / 1000, 'uris': turnUris, })) def on_OPTIONS(self, request): return (200, {}) def register_servlets(hs, http_server): VoipRestServlet(hs).register(http_server) synapse-0.24.0/synapse/rest/client/v2_alpha/000077500000000000000000000000001317335640100206735ustar00rootroot00000000000000synapse-0.24.0/synapse/rest/client/v2_alpha/__init__.py000066400000000000000000000011371317335640100230060ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. synapse-0.24.0/synapse/rest/client/v2_alpha/_base.py000066400000000000000000000037741317335640100223310ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """This module contains base REST classes for constructing client v1 servlets. 
""" from synapse.api.urls import CLIENT_V2_ALPHA_PREFIX import re import logging logger = logging.getLogger(__name__) def client_v2_patterns(path_regex, releases=(0,), v2_alpha=True, unstable=True): """Creates a regex compiled client path with the correct client path prefix. Args: path_regex (str): The regex string to match. This should NOT have a ^ as this will be prefixed. Returns: SRE_Pattern """ patterns = [] if v2_alpha: patterns.append(re.compile("^" + CLIENT_V2_ALPHA_PREFIX + path_regex)) if unstable: unstable_prefix = CLIENT_V2_ALPHA_PREFIX.replace("/v2_alpha", "/unstable") patterns.append(re.compile("^" + unstable_prefix + path_regex)) for release in releases: new_prefix = CLIENT_V2_ALPHA_PREFIX.replace("/v2_alpha", "/r%d" % release) patterns.append(re.compile("^" + new_prefix + path_regex)) return patterns def set_timeline_upper_limit(filter_json, filter_timeline_limit): if filter_timeline_limit < 0: return # no upper limits timeline = filter_json.get('room', {}).get('timeline', {}) if 'limit' in timeline: filter_json['room']['timeline']["limit"] = min( filter_json['room']['timeline']['limit'], filter_timeline_limit) synapse-0.24.0/synapse/rest/client/v2_alpha/account.py000066400000000000000000000316111317335640100227030ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # Copyright 2017 Vector Creations Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
from twisted.internet import defer from synapse.api.constants import LoginType from synapse.api.errors import LoginError, SynapseError, Codes from synapse.http.servlet import ( RestServlet, parse_json_object_from_request, assert_params_in_request ) from synapse.util.async import run_on_reactor from synapse.util.msisdn import phone_number_to_msisdn from ._base import client_v2_patterns import logging logger = logging.getLogger(__name__) class EmailPasswordRequestTokenRestServlet(RestServlet): PATTERNS = client_v2_patterns("/account/password/email/requestToken$") def __init__(self, hs): super(EmailPasswordRequestTokenRestServlet, self).__init__() self.hs = hs self.identity_handler = hs.get_handlers().identity_handler @defer.inlineCallbacks def on_POST(self, request): body = parse_json_object_from_request(request) assert_params_in_request(body, [ 'id_server', 'client_secret', 'email', 'send_attempt' ]) existingUid = yield self.hs.get_datastore().get_user_id_by_threepid( 'email', body['email'] ) if existingUid is None: raise SynapseError(400, "Email not found", Codes.THREEPID_NOT_FOUND) ret = yield self.identity_handler.requestEmailToken(**body) defer.returnValue((200, ret)) class MsisdnPasswordRequestTokenRestServlet(RestServlet): PATTERNS = client_v2_patterns("/account/password/msisdn/requestToken$") def __init__(self, hs): super(MsisdnPasswordRequestTokenRestServlet, self).__init__() self.hs = hs self.datastore = self.hs.get_datastore() self.identity_handler = hs.get_handlers().identity_handler @defer.inlineCallbacks def on_POST(self, request): body = parse_json_object_from_request(request) assert_params_in_request(body, [ 'id_server', 'client_secret', 'country', 'phone_number', 'send_attempt', ]) msisdn = phone_number_to_msisdn(body['country'], body['phone_number']) existingUid = yield self.datastore.get_user_id_by_threepid( 'msisdn', msisdn ) if existingUid is None: raise SynapseError(400, "MSISDN not found", Codes.THREEPID_NOT_FOUND) ret = yield self.identity_handler.requestMsisdnToken(**body) defer.returnValue((200, ret)) class PasswordRestServlet(RestServlet): PATTERNS = client_v2_patterns("/account/password$") def __init__(self, hs): super(PasswordRestServlet, self).__init__() self.hs = hs self.auth = hs.get_auth() self.auth_handler = hs.get_auth_handler() self.datastore = self.hs.get_datastore() @defer.inlineCallbacks def on_POST(self, request): yield run_on_reactor() body = parse_json_object_from_request(request) authed, result, params, _ = yield self.auth_handler.check_auth([ [LoginType.PASSWORD], [LoginType.EMAIL_IDENTITY], [LoginType.MSISDN], ], body, self.hs.get_ip_from_request(request)) if not authed: defer.returnValue((401, result)) user_id = None requester = None if LoginType.PASSWORD in result: # if using password, they should also be logged in requester = yield self.auth.get_user_by_req(request) user_id = requester.user.to_string() if user_id != result[LoginType.PASSWORD]: raise LoginError(400, "", Codes.UNKNOWN) elif LoginType.EMAIL_IDENTITY in result: threepid = result[LoginType.EMAIL_IDENTITY] if 'medium' not in threepid or 'address' not in threepid: raise SynapseError(500, "Malformed threepid") if threepid['medium'] == 'email': # For emails, transform the address to lowercase. # We store all email addreses as lowercase in the DB. # (See add_threepid in synapse/handlers/auth.py) threepid['address'] = threepid['address'].lower() # if using email, we must know about the email they're authing with! 
threepid_user_id = yield self.datastore.get_user_id_by_threepid( threepid['medium'], threepid['address'] ) if not threepid_user_id: raise SynapseError(404, "Email address not found", Codes.NOT_FOUND) user_id = threepid_user_id else: logger.error("Auth succeeded but no known type!", result.keys()) raise SynapseError(500, "", Codes.UNKNOWN) if 'new_password' not in params: raise SynapseError(400, "", Codes.MISSING_PARAM) new_password = params['new_password'] yield self.auth_handler.set_password( user_id, new_password, requester ) defer.returnValue((200, {})) def on_OPTIONS(self, _): return 200, {} class DeactivateAccountRestServlet(RestServlet): PATTERNS = client_v2_patterns("/account/deactivate$") def __init__(self, hs): self.hs = hs self.store = hs.get_datastore() self.auth = hs.get_auth() self.auth_handler = hs.get_auth_handler() super(DeactivateAccountRestServlet, self).__init__() @defer.inlineCallbacks def on_POST(self, request): body = parse_json_object_from_request(request) authed, result, params, _ = yield self.auth_handler.check_auth([ [LoginType.PASSWORD], ], body, self.hs.get_ip_from_request(request)) if not authed: defer.returnValue((401, result)) user_id = None requester = None if LoginType.PASSWORD in result: # if using password, they should also be logged in requester = yield self.auth.get_user_by_req(request) user_id = requester.user.to_string() if user_id != result[LoginType.PASSWORD]: raise LoginError(400, "", Codes.UNKNOWN) else: logger.error("Auth succeeded but no known type!", result.keys()) raise SynapseError(500, "", Codes.UNKNOWN) # FIXME: Theoretically there is a race here wherein user resets password # using threepid. yield self.store.user_delete_access_tokens(user_id) yield self.store.user_delete_threepids(user_id) yield self.store.user_set_password_hash(user_id, None) defer.returnValue((200, {})) class EmailThreepidRequestTokenRestServlet(RestServlet): PATTERNS = client_v2_patterns("/account/3pid/email/requestToken$") def __init__(self, hs): self.hs = hs super(EmailThreepidRequestTokenRestServlet, self).__init__() self.identity_handler = hs.get_handlers().identity_handler self.datastore = self.hs.get_datastore() @defer.inlineCallbacks def on_POST(self, request): body = parse_json_object_from_request(request) required = ['id_server', 'client_secret', 'email', 'send_attempt'] absent = [] for k in required: if k not in body: absent.append(k) if absent: raise SynapseError(400, "Missing params: %r" % absent, Codes.MISSING_PARAM) existingUid = yield self.datastore.get_user_id_by_threepid( 'email', body['email'] ) if existingUid is not None: raise SynapseError(400, "Email is already in use", Codes.THREEPID_IN_USE) ret = yield self.identity_handler.requestEmailToken(**body) defer.returnValue((200, ret)) class MsisdnThreepidRequestTokenRestServlet(RestServlet): PATTERNS = client_v2_patterns("/account/3pid/msisdn/requestToken$") def __init__(self, hs): self.hs = hs super(MsisdnThreepidRequestTokenRestServlet, self).__init__() self.identity_handler = hs.get_handlers().identity_handler self.datastore = self.hs.get_datastore() @defer.inlineCallbacks def on_POST(self, request): body = parse_json_object_from_request(request) required = [ 'id_server', 'client_secret', 'country', 'phone_number', 'send_attempt', ] absent = [] for k in required: if k not in body: absent.append(k) if absent: raise SynapseError(400, "Missing params: %r" % absent, Codes.MISSING_PARAM) msisdn = phone_number_to_msisdn(body['country'], body['phone_number']) existingUid = yield 
self.datastore.get_user_id_by_threepid( 'msisdn', msisdn ) if existingUid is not None: raise SynapseError(400, "MSISDN is already in use", Codes.THREEPID_IN_USE) ret = yield self.identity_handler.requestMsisdnToken(**body) defer.returnValue((200, ret)) class ThreepidRestServlet(RestServlet): PATTERNS = client_v2_patterns("/account/3pid$") def __init__(self, hs): super(ThreepidRestServlet, self).__init__() self.hs = hs self.identity_handler = hs.get_handlers().identity_handler self.auth = hs.get_auth() self.auth_handler = hs.get_auth_handler() self.datastore = self.hs.get_datastore() @defer.inlineCallbacks def on_GET(self, request): yield run_on_reactor() requester = yield self.auth.get_user_by_req(request) threepids = yield self.datastore.user_get_threepids( requester.user.to_string() ) defer.returnValue((200, {'threepids': threepids})) @defer.inlineCallbacks def on_POST(self, request): yield run_on_reactor() body = parse_json_object_from_request(request) threePidCreds = body.get('threePidCreds') threePidCreds = body.get('three_pid_creds', threePidCreds) if threePidCreds is None: raise SynapseError(400, "Missing param", Codes.MISSING_PARAM) requester = yield self.auth.get_user_by_req(request) user_id = requester.user.to_string() threepid = yield self.identity_handler.threepid_from_creds(threePidCreds) if not threepid: raise SynapseError( 400, "Failed to auth 3pid", Codes.THREEPID_AUTH_FAILED ) for reqd in ['medium', 'address', 'validated_at']: if reqd not in threepid: logger.warn("Couldn't add 3pid: invalid response from ID server") raise SynapseError(500, "Invalid response from ID Server") yield self.auth_handler.add_threepid( user_id, threepid['medium'], threepid['address'], threepid['validated_at'], ) if 'bind' in body and body['bind']: logger.debug( "Binding threepid %s to %s", threepid, user_id ) yield self.identity_handler.bind_threepid( threePidCreds, user_id ) defer.returnValue((200, {})) class ThreepidDeleteRestServlet(RestServlet): PATTERNS = client_v2_patterns("/account/3pid/delete$", releases=()) def __init__(self, hs): super(ThreepidDeleteRestServlet, self).__init__() self.auth = hs.get_auth() self.auth_handler = hs.get_auth_handler() @defer.inlineCallbacks def on_POST(self, request): yield run_on_reactor() body = parse_json_object_from_request(request) required = ['medium', 'address'] absent = [] for k in required: if k not in body: absent.append(k) if absent: raise SynapseError(400, "Missing params: %r" % absent, Codes.MISSING_PARAM) requester = yield self.auth.get_user_by_req(request) user_id = requester.user.to_string() yield self.auth_handler.delete_threepid( user_id, body['medium'], body['address'] ) defer.returnValue((200, {})) def register_servlets(hs, http_server): EmailPasswordRequestTokenRestServlet(hs).register(http_server) MsisdnPasswordRequestTokenRestServlet(hs).register(http_server) PasswordRestServlet(hs).register(http_server) DeactivateAccountRestServlet(hs).register(http_server) EmailThreepidRequestTokenRestServlet(hs).register(http_server) MsisdnThreepidRequestTokenRestServlet(hs).register(http_server) ThreepidRestServlet(hs).register(http_server) ThreepidDeleteRestServlet(hs).register(http_server) synapse-0.24.0/synapse/rest/client/v2_alpha/account_data.py000066400000000000000000000064771317335640100237100ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
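# ---------------------------------------------------------------------------
# A hedged sketch of the JSON bodies the account.py servlets above accept.
# The identity server, secrets and session id are placeholders; the field
# names come from the handlers themselves.

# POST .../account/password/email/requestToken (password reset, step 1)
request_token_body = {
    "id_server": "id.example.com",
    "client_secret": "d0n0tt3llanyb0dy",
    "email": "alice@example.com",
    "send_attempt": 1,
}

# POST .../account/3pid (add a third-party identifier once validated)
add_threepid_body = {
    "three_pid_creds": {
        "id_server": "id.example.com",
        "sid": "validation-session-id",
        "client_secret": "d0n0tt3llanyb0dy",
    },
    "bind": True,  # also publish the binding on the identity server
}
# ---------------------------------------------------------------------------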
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from ._base import client_v2_patterns from synapse.http.servlet import RestServlet, parse_json_object_from_request from synapse.api.errors import AuthError, SynapseError from twisted.internet import defer import logging logger = logging.getLogger(__name__) class AccountDataServlet(RestServlet): """ PUT /user/{user_id}/account_data/{account_dataType} HTTP/1.1 """ PATTERNS = client_v2_patterns( "/user/(?P[^/]*)/account_data/(?P[^/]*)" ) def __init__(self, hs): super(AccountDataServlet, self).__init__() self.auth = hs.get_auth() self.store = hs.get_datastore() self.notifier = hs.get_notifier() @defer.inlineCallbacks def on_PUT(self, request, user_id, account_data_type): requester = yield self.auth.get_user_by_req(request) if user_id != requester.user.to_string(): raise AuthError(403, "Cannot add account data for other users.") body = parse_json_object_from_request(request) max_id = yield self.store.add_account_data_for_user( user_id, account_data_type, body ) self.notifier.on_new_event( "account_data_key", max_id, users=[user_id] ) defer.returnValue((200, {})) class RoomAccountDataServlet(RestServlet): """ PUT /user/{user_id}/rooms/{room_id}/account_data/{account_dataType} HTTP/1.1 """ PATTERNS = client_v2_patterns( "/user/(?P[^/]*)" "/rooms/(?P[^/]*)" "/account_data/(?P[^/]*)" ) def __init__(self, hs): super(RoomAccountDataServlet, self).__init__() self.auth = hs.get_auth() self.store = hs.get_datastore() self.notifier = hs.get_notifier() @defer.inlineCallbacks def on_PUT(self, request, user_id, room_id, account_data_type): requester = yield self.auth.get_user_by_req(request) if user_id != requester.user.to_string(): raise AuthError(403, "Cannot add account data for other users.") body = parse_json_object_from_request(request) if account_data_type == "m.fully_read": raise SynapseError( 405, "Cannot set m.fully_read through this API." " Use /rooms/!roomId:server.name/read_markers" ) max_id = yield self.store.add_account_data_to_room( user_id, room_id, account_data_type, body ) self.notifier.on_new_event( "account_data_key", max_id, users=[user_id] ) defer.returnValue((200, {})) def register_servlets(hs, http_server): AccountDataServlet(hs).register(http_server) RoomAccountDataServlet(hs).register(http_server) synapse-0.24.0/synapse/rest/client/v2_alpha/auth.py000066400000000000000000000143261317335640100222140ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
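# ---------------------------------------------------------------------------
# A minimal client-side sketch of calling the AccountDataServlet above.  The
# homeserver name, user id and access token are placeholders, and the
# `requests` library is assumed to be available.
import requests

resp = requests.put(
    "https://matrix.example.com/_matrix/client/r0"
    "/user/@alice:example.com/account_data/org.example.custom",
    params={"access_token": "MDAxPlaceholderToken"},
    json={"favourite_colour": "blue"},  # arbitrary content for the data type
)
# -> HTTP 200 with an empty JSON object body on success
# ---------------------------------------------------------------------------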
from twisted.internet import defer from synapse.api.constants import LoginType from synapse.api.errors import SynapseError from synapse.api.urls import CLIENT_V2_ALPHA_PREFIX from synapse.http.server import finish_request from synapse.http.servlet import RestServlet from ._base import client_v2_patterns import logging logger = logging.getLogger(__name__) RECAPTCHA_TEMPLATE = """ Authentication

Hello! We need to prevent computer programs and other automated things from
creating accounts on this server.

Please verify that you're not a robot.
"""

SUCCESS_TEMPLATE = """
Success!

Thank you

You may now close this window and return to the application
""" class AuthRestServlet(RestServlet): """ Handles Client / Server API authentication in any situations where it cannot be handled in the normal flow (with requests to the same endpoint). Current use is for web fallback auth. """ PATTERNS = client_v2_patterns("/auth/(?P[\w\.]*)/fallback/web") def __init__(self, hs): super(AuthRestServlet, self).__init__() self.hs = hs self.auth = hs.get_auth() self.auth_handler = hs.get_auth_handler() self.registration_handler = hs.get_handlers().registration_handler @defer.inlineCallbacks def on_GET(self, request, stagetype): yield if stagetype == LoginType.RECAPTCHA: if ('session' not in request.args or len(request.args['session']) == 0): raise SynapseError(400, "No session supplied") session = request.args["session"][0] html = RECAPTCHA_TEMPLATE % { 'session': session, 'myurl': "%s/auth/%s/fallback/web" % ( CLIENT_V2_ALPHA_PREFIX, LoginType.RECAPTCHA ), 'sitekey': self.hs.config.recaptcha_public_key, } html_bytes = html.encode("utf8") request.setResponseCode(200) request.setHeader(b"Content-Type", b"text/html; charset=utf-8") request.setHeader(b"Server", self.hs.version_string) request.setHeader(b"Content-Length", b"%d" % (len(html_bytes),)) request.write(html_bytes) finish_request(request) defer.returnValue(None) else: raise SynapseError(404, "Unknown auth stage type") @defer.inlineCallbacks def on_POST(self, request, stagetype): yield if stagetype == "m.login.recaptcha": if ('g-recaptcha-response' not in request.args or len(request.args['g-recaptcha-response'])) == 0: raise SynapseError(400, "No captcha response supplied") if ('session' not in request.args or len(request.args['session'])) == 0: raise SynapseError(400, "No session supplied") session = request.args['session'][0] authdict = { 'response': request.args['g-recaptcha-response'][0], 'session': session, } success = yield self.auth_handler.add_oob_auth( LoginType.RECAPTCHA, authdict, self.hs.get_ip_from_request(request) ) if success: html = SUCCESS_TEMPLATE else: html = RECAPTCHA_TEMPLATE % { 'session': session, 'myurl': "%s/auth/%s/fallback/web" % ( CLIENT_V2_ALPHA_PREFIX, LoginType.RECAPTCHA ), 'sitekey': self.hs.config.recaptcha_public_key, } html_bytes = html.encode("utf8") request.setResponseCode(200) request.setHeader(b"Content-Type", b"text/html; charset=utf-8") request.setHeader(b"Server", self.hs.version_string) request.setHeader(b"Content-Length", b"%d" % (len(html_bytes),)) request.write(html_bytes) finish_request(request) defer.returnValue(None) else: raise SynapseError(404, "Unknown auth stage type") def on_OPTIONS(self, _): return 200, {} def register_servlets(hs, http_server): AuthRestServlet(hs).register(http_server) synapse-0.24.0/synapse/rest/client/v2_alpha/devices.py000066400000000000000000000125271317335640100226760ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
import logging from twisted.internet import defer from synapse.api import constants, errors from synapse.http import servlet from ._base import client_v2_patterns logger = logging.getLogger(__name__) class DevicesRestServlet(servlet.RestServlet): PATTERNS = client_v2_patterns("/devices$", releases=[], v2_alpha=False) def __init__(self, hs): """ Args: hs (synapse.server.HomeServer): server """ super(DevicesRestServlet, self).__init__() self.hs = hs self.auth = hs.get_auth() self.device_handler = hs.get_device_handler() @defer.inlineCallbacks def on_GET(self, request): requester = yield self.auth.get_user_by_req(request, allow_guest=True) devices = yield self.device_handler.get_devices_by_user( requester.user.to_string() ) defer.returnValue((200, {"devices": devices})) class DeleteDevicesRestServlet(servlet.RestServlet): """ API for bulk deletion of devices. Accepts a JSON object with a devices key which lists the device_ids to delete. Requires user interactive auth. """ PATTERNS = client_v2_patterns("/delete_devices", releases=[], v2_alpha=False) def __init__(self, hs): super(DeleteDevicesRestServlet, self).__init__() self.hs = hs self.auth = hs.get_auth() self.device_handler = hs.get_device_handler() self.auth_handler = hs.get_auth_handler() @defer.inlineCallbacks def on_POST(self, request): try: body = servlet.parse_json_object_from_request(request) except errors.SynapseError as e: if e.errcode == errors.Codes.NOT_JSON: # deal with older clients which didn't pass a J*DELETESON dict # the same as those that pass an empty dict body = {} else: raise e if 'devices' not in body: raise errors.SynapseError( 400, "No devices supplied", errcode=errors.Codes.MISSING_PARAM ) authed, result, params, _ = yield self.auth_handler.check_auth([ [constants.LoginType.PASSWORD], ], body, self.hs.get_ip_from_request(request)) if not authed: defer.returnValue((401, result)) requester = yield self.auth.get_user_by_req(request) yield self.device_handler.delete_devices( requester.user.to_string(), body['devices'], ) defer.returnValue((200, {})) class DeviceRestServlet(servlet.RestServlet): PATTERNS = client_v2_patterns("/devices/(?P[^/]*)$", releases=[], v2_alpha=False) def __init__(self, hs): """ Args: hs (synapse.server.HomeServer): server """ super(DeviceRestServlet, self).__init__() self.hs = hs self.auth = hs.get_auth() self.device_handler = hs.get_device_handler() self.auth_handler = hs.get_auth_handler() @defer.inlineCallbacks def on_GET(self, request, device_id): requester = yield self.auth.get_user_by_req(request, allow_guest=True) device = yield self.device_handler.get_device( requester.user.to_string(), device_id, ) defer.returnValue((200, device)) @defer.inlineCallbacks def on_DELETE(self, request, device_id): try: body = servlet.parse_json_object_from_request(request) except errors.SynapseError as e: if e.errcode == errors.Codes.NOT_JSON: # deal with older clients which didn't pass a JSON dict # the same as those that pass an empty dict body = {} else: raise authed, result, params, _ = yield self.auth_handler.check_auth([ [constants.LoginType.PASSWORD], ], body, self.hs.get_ip_from_request(request)) if not authed: defer.returnValue((401, result)) requester = yield self.auth.get_user_by_req(request) yield self.device_handler.delete_device( requester.user.to_string(), device_id, ) defer.returnValue((200, {})) @defer.inlineCallbacks def on_PUT(self, request, device_id): requester = yield self.auth.get_user_by_req(request, allow_guest=True) body = servlet.parse_json_object_from_request(request) yield 
self.device_handler.update_device( requester.user.to_string(), device_id, body ) defer.returnValue((200, {})) def register_servlets(hs, http_server): DeleteDevicesRestServlet(hs).register(http_server) DevicesRestServlet(hs).register(http_server) DeviceRestServlet(hs).register(http_server) synapse-0.24.0/synapse/rest/client/v2_alpha/filter.py000066400000000000000000000065661317335640100225470ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from twisted.internet import defer from synapse.api.errors import AuthError, SynapseError, StoreError, Codes from synapse.http.servlet import RestServlet, parse_json_object_from_request from synapse.types import UserID from ._base import client_v2_patterns from ._base import set_timeline_upper_limit import logging logger = logging.getLogger(__name__) class GetFilterRestServlet(RestServlet): PATTERNS = client_v2_patterns("/user/(?P[^/]*)/filter/(?P[^/]*)") def __init__(self, hs): super(GetFilterRestServlet, self).__init__() self.hs = hs self.auth = hs.get_auth() self.filtering = hs.get_filtering() @defer.inlineCallbacks def on_GET(self, request, user_id, filter_id): target_user = UserID.from_string(user_id) requester = yield self.auth.get_user_by_req(request) if target_user != requester.user: raise AuthError(403, "Cannot get filters for other users") if not self.hs.is_mine(target_user): raise AuthError(403, "Can only get filters for local users") try: filter_id = int(filter_id) except: raise SynapseError(400, "Invalid filter_id") try: filter = yield self.filtering.get_user_filter( user_localpart=target_user.localpart, filter_id=filter_id, ) defer.returnValue((200, filter.get_filter_json())) except (KeyError, StoreError): raise SynapseError(400, "No such filter", errcode=Codes.NOT_FOUND) class CreateFilterRestServlet(RestServlet): PATTERNS = client_v2_patterns("/user/(?P[^/]*)/filter") def __init__(self, hs): super(CreateFilterRestServlet, self).__init__() self.hs = hs self.auth = hs.get_auth() self.filtering = hs.get_filtering() @defer.inlineCallbacks def on_POST(self, request, user_id): target_user = UserID.from_string(user_id) requester = yield self.auth.get_user_by_req(request) if target_user != requester.user: raise AuthError(403, "Cannot create filters for other users") if not self.hs.is_mine(target_user): raise AuthError(403, "Can only create filters for local users") content = parse_json_object_from_request(request) set_timeline_upper_limit( content, self.hs.config.filter_timeline_limit ) filter_id = yield self.filtering.add_user_filter( user_localpart=target_user.localpart, user_filter=content, ) defer.returnValue((200, {"filter_id": str(filter_id)})) def register_servlets(hs, http_server): GetFilterRestServlet(hs).register(http_server) CreateFilterRestServlet(hs).register(http_server) synapse-0.24.0/synapse/rest/client/v2_alpha/groups.py000066400000000000000000000545451317335640100226010ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2017 Vector Creations 
Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from twisted.internet import defer from synapse.http.servlet import RestServlet, parse_json_object_from_request from synapse.types import GroupID from ._base import client_v2_patterns import logging logger = logging.getLogger(__name__) class GroupServlet(RestServlet): """Get the group profile """ PATTERNS = client_v2_patterns("/groups/(?P[^/]*)/profile$") def __init__(self, hs): super(GroupServlet, self).__init__() self.auth = hs.get_auth() self.clock = hs.get_clock() self.groups_handler = hs.get_groups_local_handler() @defer.inlineCallbacks def on_GET(self, request, group_id): requester = yield self.auth.get_user_by_req(request) user_id = requester.user.to_string() group_description = yield self.groups_handler.get_group_profile(group_id, user_id) defer.returnValue((200, group_description)) @defer.inlineCallbacks def on_POST(self, request, group_id): requester = yield self.auth.get_user_by_req(request) user_id = requester.user.to_string() content = parse_json_object_from_request(request) yield self.groups_handler.update_group_profile( group_id, user_id, content, ) defer.returnValue((200, {})) class GroupSummaryServlet(RestServlet): """Get the full group summary """ PATTERNS = client_v2_patterns("/groups/(?P[^/]*)/summary$") def __init__(self, hs): super(GroupSummaryServlet, self).__init__() self.auth = hs.get_auth() self.clock = hs.get_clock() self.groups_handler = hs.get_groups_local_handler() @defer.inlineCallbacks def on_GET(self, request, group_id): requester = yield self.auth.get_user_by_req(request) user_id = requester.user.to_string() get_group_summary = yield self.groups_handler.get_group_summary(group_id, user_id) defer.returnValue((200, get_group_summary)) class GroupSummaryRoomsCatServlet(RestServlet): """Update/delete a rooms entry in the summary. Matches both: - /groups/:group/summary/rooms/:room_id - /groups/:group/summary/categories/:category/rooms/:room_id """ PATTERNS = client_v2_patterns( "/groups/(?P[^/]*)/summary" "(/categories/(?P[^/]+))?" 
"/rooms/(?P[^/]*)$" ) def __init__(self, hs): super(GroupSummaryRoomsCatServlet, self).__init__() self.auth = hs.get_auth() self.clock = hs.get_clock() self.groups_handler = hs.get_groups_local_handler() @defer.inlineCallbacks def on_PUT(self, request, group_id, category_id, room_id): requester = yield self.auth.get_user_by_req(request) user_id = requester.user.to_string() content = parse_json_object_from_request(request) resp = yield self.groups_handler.update_group_summary_room( group_id, user_id, room_id=room_id, category_id=category_id, content=content, ) defer.returnValue((200, resp)) @defer.inlineCallbacks def on_DELETE(self, request, group_id, category_id, room_id): requester = yield self.auth.get_user_by_req(request) user_id = requester.user.to_string() resp = yield self.groups_handler.delete_group_summary_room( group_id, user_id, room_id=room_id, category_id=category_id, ) defer.returnValue((200, resp)) class GroupCategoryServlet(RestServlet): """Get/add/update/delete a group category """ PATTERNS = client_v2_patterns( "/groups/(?P[^/]*)/categories/(?P[^/]+)$" ) def __init__(self, hs): super(GroupCategoryServlet, self).__init__() self.auth = hs.get_auth() self.clock = hs.get_clock() self.groups_handler = hs.get_groups_local_handler() @defer.inlineCallbacks def on_GET(self, request, group_id, category_id): requester = yield self.auth.get_user_by_req(request) user_id = requester.user.to_string() category = yield self.groups_handler.get_group_category( group_id, user_id, category_id=category_id, ) defer.returnValue((200, category)) @defer.inlineCallbacks def on_PUT(self, request, group_id, category_id): requester = yield self.auth.get_user_by_req(request) user_id = requester.user.to_string() content = parse_json_object_from_request(request) resp = yield self.groups_handler.update_group_category( group_id, user_id, category_id=category_id, content=content, ) defer.returnValue((200, resp)) @defer.inlineCallbacks def on_DELETE(self, request, group_id, category_id): requester = yield self.auth.get_user_by_req(request) user_id = requester.user.to_string() resp = yield self.groups_handler.delete_group_category( group_id, user_id, category_id=category_id, ) defer.returnValue((200, resp)) class GroupCategoriesServlet(RestServlet): """Get all group categories """ PATTERNS = client_v2_patterns( "/groups/(?P[^/]*)/categories/$" ) def __init__(self, hs): super(GroupCategoriesServlet, self).__init__() self.auth = hs.get_auth() self.clock = hs.get_clock() self.groups_handler = hs.get_groups_local_handler() @defer.inlineCallbacks def on_GET(self, request, group_id): requester = yield self.auth.get_user_by_req(request) user_id = requester.user.to_string() category = yield self.groups_handler.get_group_categories( group_id, user_id, ) defer.returnValue((200, category)) class GroupRoleServlet(RestServlet): """Get/add/update/delete a group role """ PATTERNS = client_v2_patterns( "/groups/(?P[^/]*)/roles/(?P[^/]+)$" ) def __init__(self, hs): super(GroupRoleServlet, self).__init__() self.auth = hs.get_auth() self.clock = hs.get_clock() self.groups_handler = hs.get_groups_local_handler() @defer.inlineCallbacks def on_GET(self, request, group_id, role_id): requester = yield self.auth.get_user_by_req(request) user_id = requester.user.to_string() category = yield self.groups_handler.get_group_role( group_id, user_id, role_id=role_id, ) defer.returnValue((200, category)) @defer.inlineCallbacks def on_PUT(self, request, group_id, role_id): requester = yield self.auth.get_user_by_req(request) user_id = 
requester.user.to_string() content = parse_json_object_from_request(request) resp = yield self.groups_handler.update_group_role( group_id, user_id, role_id=role_id, content=content, ) defer.returnValue((200, resp)) @defer.inlineCallbacks def on_DELETE(self, request, group_id, role_id): requester = yield self.auth.get_user_by_req(request) user_id = requester.user.to_string() resp = yield self.groups_handler.delete_group_role( group_id, user_id, role_id=role_id, ) defer.returnValue((200, resp)) class GroupRolesServlet(RestServlet): """Get all group roles """ PATTERNS = client_v2_patterns( "/groups/(?P[^/]*)/roles/$" ) def __init__(self, hs): super(GroupRolesServlet, self).__init__() self.auth = hs.get_auth() self.clock = hs.get_clock() self.groups_handler = hs.get_groups_local_handler() @defer.inlineCallbacks def on_GET(self, request, group_id): requester = yield self.auth.get_user_by_req(request) user_id = requester.user.to_string() category = yield self.groups_handler.get_group_roles( group_id, user_id, ) defer.returnValue((200, category)) class GroupSummaryUsersRoleServlet(RestServlet): """Update/delete a user's entry in the summary. Matches both: - /groups/:group/summary/users/:room_id - /groups/:group/summary/roles/:role/users/:user_id """ PATTERNS = client_v2_patterns( "/groups/(?P[^/]*)/summary" "(/roles/(?P[^/]+))?" "/users/(?P[^/]*)$" ) def __init__(self, hs): super(GroupSummaryUsersRoleServlet, self).__init__() self.auth = hs.get_auth() self.clock = hs.get_clock() self.groups_handler = hs.get_groups_local_handler() @defer.inlineCallbacks def on_PUT(self, request, group_id, role_id, user_id): requester = yield self.auth.get_user_by_req(request) requester_user_id = requester.user.to_string() content = parse_json_object_from_request(request) resp = yield self.groups_handler.update_group_summary_user( group_id, requester_user_id, user_id=user_id, role_id=role_id, content=content, ) defer.returnValue((200, resp)) @defer.inlineCallbacks def on_DELETE(self, request, group_id, role_id, user_id): requester = yield self.auth.get_user_by_req(request) requester_user_id = requester.user.to_string() resp = yield self.groups_handler.delete_group_summary_user( group_id, requester_user_id, user_id=user_id, role_id=role_id, ) defer.returnValue((200, resp)) class GroupRoomServlet(RestServlet): """Get all rooms in a group """ PATTERNS = client_v2_patterns("/groups/(?P[^/]*)/rooms$") def __init__(self, hs): super(GroupRoomServlet, self).__init__() self.auth = hs.get_auth() self.clock = hs.get_clock() self.groups_handler = hs.get_groups_local_handler() @defer.inlineCallbacks def on_GET(self, request, group_id): requester = yield self.auth.get_user_by_req(request) user_id = requester.user.to_string() result = yield self.groups_handler.get_rooms_in_group(group_id, user_id) defer.returnValue((200, result)) class GroupUsersServlet(RestServlet): """Get all users in a group """ PATTERNS = client_v2_patterns("/groups/(?P[^/]*)/users$") def __init__(self, hs): super(GroupUsersServlet, self).__init__() self.auth = hs.get_auth() self.clock = hs.get_clock() self.groups_handler = hs.get_groups_local_handler() @defer.inlineCallbacks def on_GET(self, request, group_id): requester = yield self.auth.get_user_by_req(request) user_id = requester.user.to_string() result = yield self.groups_handler.get_users_in_group(group_id, user_id) defer.returnValue((200, result)) class GroupInvitedUsersServlet(RestServlet): """Get users invited to a group """ PATTERNS = client_v2_patterns("/groups/(?P[^/]*)/invited_users$") def 
__init__(self, hs): super(GroupInvitedUsersServlet, self).__init__() self.auth = hs.get_auth() self.clock = hs.get_clock() self.groups_handler = hs.get_groups_local_handler() @defer.inlineCallbacks def on_GET(self, request, group_id): requester = yield self.auth.get_user_by_req(request) user_id = requester.user.to_string() result = yield self.groups_handler.get_invited_users_in_group(group_id, user_id) defer.returnValue((200, result)) class GroupCreateServlet(RestServlet): """Create a group """ PATTERNS = client_v2_patterns("/create_group$") def __init__(self, hs): super(GroupCreateServlet, self).__init__() self.auth = hs.get_auth() self.clock = hs.get_clock() self.groups_handler = hs.get_groups_local_handler() self.server_name = hs.hostname @defer.inlineCallbacks def on_POST(self, request): requester = yield self.auth.get_user_by_req(request) user_id = requester.user.to_string() # TODO: Create group on remote server content = parse_json_object_from_request(request) localpart = content.pop("localpart") group_id = GroupID.create(localpart, self.server_name).to_string() result = yield self.groups_handler.create_group(group_id, user_id, content) defer.returnValue((200, result)) class GroupAdminRoomsServlet(RestServlet): """Add a room to the group """ PATTERNS = client_v2_patterns( "/groups/(?P[^/]*)/admin/rooms/(?P[^/]*)$" ) def __init__(self, hs): super(GroupAdminRoomsServlet, self).__init__() self.auth = hs.get_auth() self.clock = hs.get_clock() self.groups_handler = hs.get_groups_local_handler() @defer.inlineCallbacks def on_PUT(self, request, group_id, room_id): requester = yield self.auth.get_user_by_req(request) user_id = requester.user.to_string() content = parse_json_object_from_request(request) result = yield self.groups_handler.add_room_to_group( group_id, user_id, room_id, content, ) defer.returnValue((200, result)) @defer.inlineCallbacks def on_DELETE(self, request, group_id, room_id): requester = yield self.auth.get_user_by_req(request) user_id = requester.user.to_string() result = yield self.groups_handler.remove_room_from_group( group_id, user_id, room_id, ) defer.returnValue((200, result)) class GroupAdminUsersInviteServlet(RestServlet): """Invite a user to the group """ PATTERNS = client_v2_patterns( "/groups/(?P[^/]*)/admin/users/invite/(?P[^/]*)$" ) def __init__(self, hs): super(GroupAdminUsersInviteServlet, self).__init__() self.auth = hs.get_auth() self.clock = hs.get_clock() self.groups_handler = hs.get_groups_local_handler() self.store = hs.get_datastore() self.is_mine_id = hs.is_mine_id @defer.inlineCallbacks def on_PUT(self, request, group_id, user_id): requester = yield self.auth.get_user_by_req(request) requester_user_id = requester.user.to_string() content = parse_json_object_from_request(request) config = content.get("config", {}) result = yield self.groups_handler.invite( group_id, user_id, requester_user_id, config, ) defer.returnValue((200, result)) class GroupAdminUsersKickServlet(RestServlet): """Kick a user from the group """ PATTERNS = client_v2_patterns( "/groups/(?P[^/]*)/admin/users/remove/(?P[^/]*)$" ) def __init__(self, hs): super(GroupAdminUsersKickServlet, self).__init__() self.auth = hs.get_auth() self.clock = hs.get_clock() self.groups_handler = hs.get_groups_local_handler() @defer.inlineCallbacks def on_PUT(self, request, group_id, user_id): requester = yield self.auth.get_user_by_req(request) requester_user_id = requester.user.to_string() content = parse_json_object_from_request(request) result = yield self.groups_handler.remove_user_from_group( 
group_id, user_id, requester_user_id, content, ) defer.returnValue((200, result)) class GroupSelfLeaveServlet(RestServlet): """Leave a joined group """ PATTERNS = client_v2_patterns( "/groups/(?P[^/]*)/self/leave$" ) def __init__(self, hs): super(GroupSelfLeaveServlet, self).__init__() self.auth = hs.get_auth() self.clock = hs.get_clock() self.groups_handler = hs.get_groups_local_handler() @defer.inlineCallbacks def on_PUT(self, request, group_id): requester = yield self.auth.get_user_by_req(request) requester_user_id = requester.user.to_string() content = parse_json_object_from_request(request) result = yield self.groups_handler.remove_user_from_group( group_id, requester_user_id, requester_user_id, content, ) defer.returnValue((200, result)) class GroupSelfJoinServlet(RestServlet): """Attempt to join a group, or knock """ PATTERNS = client_v2_patterns( "/groups/(?P[^/]*)/self/join$" ) def __init__(self, hs): super(GroupSelfJoinServlet, self).__init__() self.auth = hs.get_auth() self.clock = hs.get_clock() self.groups_handler = hs.get_groups_local_handler() @defer.inlineCallbacks def on_PUT(self, request, group_id): requester = yield self.auth.get_user_by_req(request) requester_user_id = requester.user.to_string() content = parse_json_object_from_request(request) result = yield self.groups_handler.join_group( group_id, requester_user_id, content, ) defer.returnValue((200, result)) class GroupSelfAcceptInviteServlet(RestServlet): """Accept a group invite """ PATTERNS = client_v2_patterns( "/groups/(?P[^/]*)/self/accept_invite$" ) def __init__(self, hs): super(GroupSelfAcceptInviteServlet, self).__init__() self.auth = hs.get_auth() self.clock = hs.get_clock() self.groups_handler = hs.get_groups_local_handler() @defer.inlineCallbacks def on_PUT(self, request, group_id): requester = yield self.auth.get_user_by_req(request) requester_user_id = requester.user.to_string() content = parse_json_object_from_request(request) result = yield self.groups_handler.accept_invite( group_id, requester_user_id, content, ) defer.returnValue((200, result)) class GroupSelfUpdatePublicityServlet(RestServlet): """Update whether we publicise a users membership of a group """ PATTERNS = client_v2_patterns( "/groups/(?P[^/]*)/self/update_publicity$" ) def __init__(self, hs): super(GroupSelfUpdatePublicityServlet, self).__init__() self.auth = hs.get_auth() self.clock = hs.get_clock() self.store = hs.get_datastore() @defer.inlineCallbacks def on_PUT(self, request, group_id): requester = yield self.auth.get_user_by_req(request) requester_user_id = requester.user.to_string() content = parse_json_object_from_request(request) publicise = content["publicise"] yield self.store.update_group_publicity( group_id, requester_user_id, publicise, ) defer.returnValue((200, {})) class PublicisedGroupsForUserServlet(RestServlet): """Get the list of groups a user is advertising """ PATTERNS = client_v2_patterns( "/publicised_groups/(?P[^/]*)$" ) def __init__(self, hs): super(PublicisedGroupsForUserServlet, self).__init__() self.auth = hs.get_auth() self.clock = hs.get_clock() self.store = hs.get_datastore() self.groups_handler = hs.get_groups_local_handler() @defer.inlineCallbacks def on_GET(self, request, user_id): yield self.auth.get_user_by_req(request) result = yield self.groups_handler.get_publicised_groups_for_user( user_id ) defer.returnValue((200, result)) class PublicisedGroupsForUsersServlet(RestServlet): """Get the list of groups a user is advertising """ PATTERNS = client_v2_patterns( "/publicised_groups$" ) def 
__init__(self, hs): super(PublicisedGroupsForUsersServlet, self).__init__() self.auth = hs.get_auth() self.clock = hs.get_clock() self.store = hs.get_datastore() self.groups_handler = hs.get_groups_local_handler() @defer.inlineCallbacks def on_POST(self, request): yield self.auth.get_user_by_req(request) content = parse_json_object_from_request(request) user_ids = content["user_ids"] result = yield self.groups_handler.bulk_get_publicised_groups( user_ids ) defer.returnValue((200, result)) class GroupsForUserServlet(RestServlet): """Get all groups the logged in user is joined to """ PATTERNS = client_v2_patterns( "/joined_groups$" ) def __init__(self, hs): super(GroupsForUserServlet, self).__init__() self.auth = hs.get_auth() self.clock = hs.get_clock() self.groups_handler = hs.get_groups_local_handler() @defer.inlineCallbacks def on_GET(self, request): requester = yield self.auth.get_user_by_req(request) user_id = requester.user.to_string() result = yield self.groups_handler.get_joined_groups(user_id) defer.returnValue((200, result)) def register_servlets(hs, http_server): GroupServlet(hs).register(http_server) GroupSummaryServlet(hs).register(http_server) GroupInvitedUsersServlet(hs).register(http_server) GroupUsersServlet(hs).register(http_server) GroupRoomServlet(hs).register(http_server) GroupCreateServlet(hs).register(http_server) GroupAdminRoomsServlet(hs).register(http_server) GroupAdminUsersInviteServlet(hs).register(http_server) GroupAdminUsersKickServlet(hs).register(http_server) GroupSelfLeaveServlet(hs).register(http_server) GroupSelfJoinServlet(hs).register(http_server) GroupSelfAcceptInviteServlet(hs).register(http_server) GroupsForUserServlet(hs).register(http_server) GroupCategoryServlet(hs).register(http_server) GroupCategoriesServlet(hs).register(http_server) GroupSummaryRoomsCatServlet(hs).register(http_server) GroupRoleServlet(hs).register(http_server) GroupRolesServlet(hs).register(http_server) GroupSelfUpdatePublicityServlet(hs).register(http_server) GroupSummaryUsersRoleServlet(hs).register(http_server) PublicisedGroupsForUserServlet(hs).register(http_server) PublicisedGroupsForUsersServlet(hs).register(http_server) synapse-0.24.0/synapse/rest/client/v2_alpha/keys.py000066400000000000000000000163601317335640100222260ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
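# ---------------------------------------------------------------------------
# A hedged sketch of creating a group ("community") via GroupCreateServlet
# above.  The localpart and profile fields are illustrative; the resulting
# group id is formed by GroupID.create() as "+<localpart>:<server_name>".
create_group_body = {
    "localpart": "sample-team",  # would become +sample-team:example.com here
    "profile": {                 # assumed profile shape, passed to the handler
        "name": "Sample Team",
        "short_description": "An illustrative community",
    },
}
# POST /_matrix/client/r0/create_group with the body above, then e.g.
# GET  /_matrix/client/r0/groups/+sample-team:example.com/profile
# ---------------------------------------------------------------------------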
import logging from twisted.internet import defer from synapse.api.errors import SynapseError from synapse.http.servlet import ( RestServlet, parse_json_object_from_request, parse_integer ) from synapse.http.servlet import parse_string from synapse.types import StreamToken from ._base import client_v2_patterns logger = logging.getLogger(__name__) class KeyUploadServlet(RestServlet): """ POST /keys/upload HTTP/1.1 Content-Type: application/json { "device_keys": { "user_id": "", "device_id": "", "valid_until_ts": , "algorithms": [ "m.olm.curve25519-aes-sha256", ] "keys": { ":": "", }, "signatures:" { "" { ":": "" } } }, "one_time_keys": { ":": "" }, } """ PATTERNS = client_v2_patterns("/keys/upload(/(?P[^/]+))?$", releases=()) def __init__(self, hs): """ Args: hs (synapse.server.HomeServer): server """ super(KeyUploadServlet, self).__init__() self.auth = hs.get_auth() self.e2e_keys_handler = hs.get_e2e_keys_handler() @defer.inlineCallbacks def on_POST(self, request, device_id): requester = yield self.auth.get_user_by_req(request, allow_guest=True) user_id = requester.user.to_string() body = parse_json_object_from_request(request) if device_id is not None: # passing the device_id here is deprecated; however, we allow it # for now for compatibility with older clients. if (requester.device_id is not None and device_id != requester.device_id): logger.warning("Client uploading keys for a different device " "(logged in as %s, uploading for %s)", requester.device_id, device_id) else: device_id = requester.device_id if device_id is None: raise SynapseError( 400, "To upload keys, you must pass device_id when authenticating" ) result = yield self.e2e_keys_handler.upload_keys_for_user( user_id, device_id, body ) defer.returnValue((200, result)) class KeyQueryServlet(RestServlet): """ POST /keys/query HTTP/1.1 Content-Type: application/json { "device_keys": { "": [""] } } HTTP/1.1 200 OK { "device_keys": { "": { "": { "user_id": "", // Duplicated to be signed "device_id": "", // Duplicated to be signed "valid_until_ts": , "algorithms": [ // List of supported algorithms "m.olm.curve25519-aes-sha256", ], "keys": { // Must include a ed25519 signing key ":": "", }, "signatures:" { // Must be signed with device's ed25519 key "/": { ":": "" } // Must be signed by this server. "": { ":": "" } } } } } } """ PATTERNS = client_v2_patterns( "/keys/query$", releases=() ) def __init__(self, hs): """ Args: hs (synapse.server.HomeServer): """ super(KeyQueryServlet, self).__init__() self.auth = hs.get_auth() self.e2e_keys_handler = hs.get_e2e_keys_handler() @defer.inlineCallbacks def on_POST(self, request): yield self.auth.get_user_by_req(request, allow_guest=True) timeout = parse_integer(request, "timeout", 10 * 1000) body = parse_json_object_from_request(request) result = yield self.e2e_keys_handler.query_devices(body, timeout) defer.returnValue((200, result)) class KeyChangesServlet(RestServlet): """Returns the list of changes of keys between two stream tokens (may return spurious extra results, since we currently ignore the `to` param). GET /keys/changes?from=...&to=... 
200 OK { "changed": ["@foo:example.com"] } """ PATTERNS = client_v2_patterns( "/keys/changes$", releases=() ) def __init__(self, hs): """ Args: hs (synapse.server.HomeServer): """ super(KeyChangesServlet, self).__init__() self.auth = hs.get_auth() self.device_handler = hs.get_device_handler() @defer.inlineCallbacks def on_GET(self, request): requester = yield self.auth.get_user_by_req(request, allow_guest=True) from_token_string = parse_string(request, "from") # We want to enforce they do pass us one, but we ignore it and return # changes after the "to" as well as before. parse_string(request, "to") from_token = StreamToken.from_string(from_token_string) user_id = requester.user.to_string() results = yield self.device_handler.get_user_ids_changed( user_id, from_token, ) defer.returnValue((200, results)) class OneTimeKeyServlet(RestServlet): """ POST /keys/claim HTTP/1.1 { "one_time_keys": { "": { "": "" } } } HTTP/1.1 200 OK { "one_time_keys": { "": { "": { ":": "" } } } } """ PATTERNS = client_v2_patterns( "/keys/claim$", releases=() ) def __init__(self, hs): super(OneTimeKeyServlet, self).__init__() self.auth = hs.get_auth() self.e2e_keys_handler = hs.get_e2e_keys_handler() @defer.inlineCallbacks def on_POST(self, request): yield self.auth.get_user_by_req(request, allow_guest=True) timeout = parse_integer(request, "timeout", 10 * 1000) body = parse_json_object_from_request(request) result = yield self.e2e_keys_handler.claim_one_time_keys( body, timeout, ) defer.returnValue((200, result)) def register_servlets(hs, http_server): KeyUploadServlet(hs).register(http_server) KeyQueryServlet(hs).register(http_server) KeyChangesServlet(hs).register(http_server) OneTimeKeyServlet(hs).register(http_server) synapse-0.24.0/synapse/rest/client/v2_alpha/notifications.py000066400000000000000000000063441317335640100241250ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
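# ---------------------------------------------------------------------------
# A sketch of the one-time-key claim exchange handled by OneTimeKeyServlet
# above; the user id, device id, algorithm and key id are illustrative
# placeholders.
claim_request = {
    "one_time_keys": {
        "@alice:example.com": {"DEVICEID": "signed_curve25519"},
    },
}
claim_response = {
    "one_time_keys": {
        "@alice:example.com": {
            "DEVICEID": {
                "signed_curve25519:AAAAHg": "base64+encoded+one+time+key",
            },
        },
    },
}
# ---------------------------------------------------------------------------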
from twisted.internet import defer from synapse.http.servlet import ( RestServlet, parse_string, parse_integer ) from synapse.events.utils import ( serialize_event, format_event_for_client_v2_without_room_id, ) from ._base import client_v2_patterns import logging logger = logging.getLogger(__name__) class NotificationsServlet(RestServlet): PATTERNS = client_v2_patterns("/notifications$", releases=()) def __init__(self, hs): super(NotificationsServlet, self).__init__() self.store = hs.get_datastore() self.auth = hs.get_auth() self.clock = hs.get_clock() @defer.inlineCallbacks def on_GET(self, request): requester = yield self.auth.get_user_by_req(request) user_id = requester.user.to_string() from_token = parse_string(request, "from", required=False) limit = parse_integer(request, "limit", default=50) only = parse_string(request, "only", required=False) limit = min(limit, 500) push_actions = yield self.store.get_push_actions_for_user( user_id, from_token, limit, only_highlight=(only == "highlight") ) receipts_by_room = yield self.store.get_receipts_for_user_with_orderings( user_id, 'm.read' ) notif_event_ids = [pa["event_id"] for pa in push_actions] notif_events = yield self.store.get_events(notif_event_ids) returned_push_actions = [] next_token = None for pa in push_actions: returned_pa = { "room_id": pa["room_id"], "profile_tag": pa["profile_tag"], "actions": pa["actions"], "ts": pa["received_ts"], "event": serialize_event( notif_events[pa["event_id"]], self.clock.time_msec(), event_format=format_event_for_client_v2_without_room_id, ), } if pa["room_id"] not in receipts_by_room: returned_pa["read"] = False else: receipt = receipts_by_room[pa["room_id"]] returned_pa["read"] = ( receipt["topological_ordering"], receipt["stream_ordering"] ) >= ( pa["topological_ordering"], pa["stream_ordering"] ) returned_push_actions.append(returned_pa) next_token = pa["stream_ordering"] defer.returnValue((200, { "notifications": returned_push_actions, "next_token": next_token, })) def register_servlets(hs, http_server): NotificationsServlet(hs).register(http_server) synapse-0.24.0/synapse/rest/client/v2_alpha/openid.py000066400000000000000000000057461317335640100225370ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from ._base import client_v2_patterns from synapse.http.servlet import RestServlet, parse_json_object_from_request from synapse.api.errors import AuthError from synapse.util.stringutils import random_string from twisted.internet import defer import logging logger = logging.getLogger(__name__) class IdTokenServlet(RestServlet): """ Get a bearer token that may be passed to a third party to confirm ownership of a matrix user id. The format of the response could be made compatible with the format given in http://openid.net/specs/openid-connect-core-1_0.html#TokenResponse But instead of returning a signed "id_token" the response contains the name of the issuing matrix homeserver. 
This means that for now the third party will need to check the validity of the "id_token" against the federation /openid/userinfo endpoint of the homeserver. Request: POST /user/{user_id}/openid/request_token?access_token=... HTTP/1.1 {} Response: HTTP/1.1 200 OK { "access_token": "ABDEFGH", "token_type": "Bearer", "matrix_server_name": "example.com", "expires_in": 3600, } """ PATTERNS = client_v2_patterns( "/user/(?P[^/]*)/openid/request_token" ) EXPIRES_MS = 3600 * 1000 def __init__(self, hs): super(IdTokenServlet, self).__init__() self.auth = hs.get_auth() self.store = hs.get_datastore() self.clock = hs.get_clock() self.server_name = hs.config.server_name @defer.inlineCallbacks def on_POST(self, request, user_id): requester = yield self.auth.get_user_by_req(request) if user_id != requester.user.to_string(): raise AuthError(403, "Cannot request tokens for other users.") # Parse the request body to make sure it's JSON, but ignore the contents # for now. parse_json_object_from_request(request) token = random_string(24) ts_valid_until_ms = self.clock.time_msec() + self.EXPIRES_MS yield self.store.insert_open_id_token(token, ts_valid_until_ms, user_id) defer.returnValue((200, { "access_token": token, "token_type": "Bearer", "matrix_server_name": self.server_name, "expires_in": self.EXPIRES_MS / 1000, })) def register_servlets(hs, http_server): IdTokenServlet(hs).register(http_server) synapse-0.24.0/synapse/rest/client/v2_alpha/read_marker.py000066400000000000000000000042701317335640100235240ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2017 Vector Creations Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
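# ---------------------------------------------------------------------------
# A sketch of the OpenID token exchange implemented by IdTokenServlet above.
# The token value and server names are placeholders; the federation userinfo
# path is the endpoint the docstring refers to.
token_response = {
    "access_token": "ABCDEFGH",  # random_string(24) in practice
    "token_type": "Bearer",
    "matrix_server_name": "example.com",
    "expires_in": 3600,
}
# A relying party then checks the token against the issuing homeserver:
#   GET https://example.com/_matrix/federation/v1/openid/userinfo
#       ?access_token=ABCDEFGH
#   -> {"sub": "@alice:example.com"}
# ---------------------------------------------------------------------------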
from twisted.internet import defer from synapse.http.servlet import RestServlet, parse_json_object_from_request from ._base import client_v2_patterns import logging logger = logging.getLogger(__name__) class ReadMarkerRestServlet(RestServlet): PATTERNS = client_v2_patterns("/rooms/(?P[^/]*)/read_markers$") def __init__(self, hs): super(ReadMarkerRestServlet, self).__init__() self.auth = hs.get_auth() self.receipts_handler = hs.get_receipts_handler() self.read_marker_handler = hs.get_read_marker_handler() self.presence_handler = hs.get_presence_handler() @defer.inlineCallbacks def on_POST(self, request, room_id): requester = yield self.auth.get_user_by_req(request) yield self.presence_handler.bump_presence_active_time(requester.user) body = parse_json_object_from_request(request) read_event_id = body.get("m.read", None) if read_event_id: yield self.receipts_handler.received_client_receipt( room_id, "m.read", user_id=requester.user.to_string(), event_id=read_event_id ) read_marker_event_id = body.get("m.fully_read", None) if read_marker_event_id: yield self.read_marker_handler.received_client_read_marker( room_id, user_id=requester.user.to_string(), event_id=read_marker_event_id ) defer.returnValue((200, {})) def register_servlets(hs, http_server): ReadMarkerRestServlet(hs).register(http_server) synapse-0.24.0/synapse/rest/client/v2_alpha/receipts.py000066400000000000000000000036211317335640100230650ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from twisted.internet import defer from synapse.api.errors import SynapseError from synapse.http.servlet import RestServlet from ._base import client_v2_patterns import logging logger = logging.getLogger(__name__) class ReceiptRestServlet(RestServlet): PATTERNS = client_v2_patterns( "/rooms/(?P[^/]*)" "/receipt/(?P[^/]*)" "/(?P[^/]*)$" ) def __init__(self, hs): super(ReceiptRestServlet, self).__init__() self.hs = hs self.auth = hs.get_auth() self.receipts_handler = hs.get_receipts_handler() self.presence_handler = hs.get_presence_handler() @defer.inlineCallbacks def on_POST(self, request, room_id, receipt_type, event_id): requester = yield self.auth.get_user_by_req(request) if receipt_type != "m.read": raise SynapseError(400, "Receipt type must be 'm.read'") yield self.presence_handler.bump_presence_active_time(requester.user) yield self.receipts_handler.received_client_receipt( room_id, receipt_type, user_id=requester.user.to_string(), event_id=event_id ) defer.returnValue((200, {})) def register_servlets(hs, http_server): ReceiptRestServlet(hs).register(http_server) synapse-0.24.0/synapse/rest/client/v2_alpha/register.py000066400000000000000000000562471317335640100231070ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015 - 2016 OpenMarket Ltd # Copyright 2017 Vector Creations Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
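# ---------------------------------------------------------------------------
# A minimal sketch of the body accepted by ReadMarkerRestServlet above
# (POST /rooms/{roomId}/read_markers); the event ids are placeholders.
read_markers_body = {
    "m.fully_read": "$fullyreadevent:example.com",  # moves the fully-read marker
    "m.read": "$latestevent:example.com",           # optionally also a receipt
}
# ---------------------------------------------------------------------------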
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from twisted.internet import defer import synapse import synapse.types from synapse.api.auth import get_access_token_from_request, has_access_token from synapse.api.constants import LoginType from synapse.types import RoomID, RoomAlias from synapse.api.errors import SynapseError, Codes, UnrecognizedRequestError from synapse.http.servlet import ( RestServlet, parse_json_object_from_request, assert_params_in_request, parse_string ) from synapse.util.msisdn import phone_number_to_msisdn from ._base import client_v2_patterns import logging import hmac from hashlib import sha1 from synapse.util.async import run_on_reactor from synapse.util.ratelimitutils import FederationRateLimiter # We ought to be using hmac.compare_digest() but on older pythons it doesn't # exist. It's a _really minor_ security flaw to use plain string comparison # because the timing attack is so obscured by all the other code here it's # unlikely to make much difference if hasattr(hmac, "compare_digest"): compare_digest = hmac.compare_digest else: def compare_digest(a, b): return a == b logger = logging.getLogger(__name__) class EmailRegisterRequestTokenRestServlet(RestServlet): PATTERNS = client_v2_patterns("/register/email/requestToken$") def __init__(self, hs): """ Args: hs (synapse.server.HomeServer): server """ super(EmailRegisterRequestTokenRestServlet, self).__init__() self.hs = hs self.identity_handler = hs.get_handlers().identity_handler @defer.inlineCallbacks def on_POST(self, request): body = parse_json_object_from_request(request) assert_params_in_request(body, [ 'id_server', 'client_secret', 'email', 'send_attempt' ]) existingUid = yield self.hs.get_datastore().get_user_id_by_threepid( 'email', body['email'] ) if existingUid is not None: raise SynapseError(400, "Email is already in use", Codes.THREEPID_IN_USE) ret = yield self.identity_handler.requestEmailToken(**body) defer.returnValue((200, ret)) class MsisdnRegisterRequestTokenRestServlet(RestServlet): PATTERNS = client_v2_patterns("/register/msisdn/requestToken$") def __init__(self, hs): """ Args: hs (synapse.server.HomeServer): server """ super(MsisdnRegisterRequestTokenRestServlet, self).__init__() self.hs = hs self.identity_handler = hs.get_handlers().identity_handler @defer.inlineCallbacks def on_POST(self, request): body = parse_json_object_from_request(request) assert_params_in_request(body, [ 'id_server', 'client_secret', 'country', 'phone_number', 'send_attempt', ]) msisdn = phone_number_to_msisdn(body['country'], body['phone_number']) existingUid = yield self.hs.get_datastore().get_user_id_by_threepid( 'msisdn', msisdn ) if existingUid is not None: raise SynapseError( 400, "Phone number is already in use", Codes.THREEPID_IN_USE ) ret = yield self.identity_handler.requestMsisdnToken(**body) defer.returnValue((200, ret)) class UsernameAvailabilityRestServlet(RestServlet): PATTERNS = client_v2_patterns("/register/available") def __init__(self, hs): """ Args: hs (synapse.server.HomeServer): server """ super(UsernameAvailabilityRestServlet, self).__init__() self.hs = hs self.registration_handler = hs.get_handlers().registration_handler self.ratelimiter = 
FederationRateLimiter( hs.get_clock(), # Time window of 2s window_size=2000, # Artificially delay requests if rate > sleep_limit/window_size sleep_limit=1, # Amount of artificial delay to apply sleep_msec=1000, # Error with 429 if more than reject_limit requests are queued reject_limit=1, # Allow 1 request at a time concurrent_requests=1, ) @defer.inlineCallbacks def on_GET(self, request): ip = self.hs.get_ip_from_request(request) with self.ratelimiter.ratelimit(ip) as wait_deferred: yield wait_deferred username = parse_string(request, "username", required=True) yield self.registration_handler.check_username(username) defer.returnValue((200, {"available": True})) class RegisterRestServlet(RestServlet): PATTERNS = client_v2_patterns("/register$") def __init__(self, hs): """ Args: hs (synapse.server.HomeServer): server """ super(RegisterRestServlet, self).__init__() self.hs = hs self.auth = hs.get_auth() self.store = hs.get_datastore() self.auth_handler = hs.get_auth_handler() self.registration_handler = hs.get_handlers().registration_handler self.identity_handler = hs.get_handlers().identity_handler self.room_member_handler = hs.get_handlers().room_member_handler self.device_handler = hs.get_device_handler() self.macaroon_gen = hs.get_macaroon_generator() @defer.inlineCallbacks def on_POST(self, request): yield run_on_reactor() body = parse_json_object_from_request(request) kind = "user" if "kind" in request.args: kind = request.args["kind"][0] if kind == "guest": ret = yield self._do_guest_registration(body) defer.returnValue(ret) return elif kind != "user": raise UnrecognizedRequestError( "Do not understand membership kind: %s" % (kind,) ) # we do basic sanity checks here because the auth layer will store these # in sessions. Pull out the username/password provided to us. desired_password = None if 'password' in body: if (not isinstance(body['password'], basestring) or len(body['password']) > 512): raise SynapseError(400, "Invalid password") desired_password = body["password"] desired_username = None if 'username' in body: if (not isinstance(body['username'], basestring) or len(body['username']) > 512): raise SynapseError(400, "Invalid username") desired_username = body['username'] appservice = None if has_access_token(request): appservice = yield self.auth.get_appservice_by_req(request) # fork off as soon as possible for ASes and shared secret auth which # have completely different registration flows to normal users # == Application Service Registration == if appservice: # Set the desired user according to the AS API (which uses the # 'user' key not 'username'). Since this is a new addition, we'll # fallback to 'username' if they gave one. desired_username = body.get("user", desired_username) access_token = get_access_token_from_request(request) if isinstance(desired_username, basestring): result = yield self._do_appservice_registration( desired_username, access_token, body ) defer.returnValue((200, result)) # we throw for non 200 responses return # == Shared Secret Registration == (e.g. create new user scripts) if 'mac' in body: # FIXME: Should we really be determining if this is shared secret # auth based purely on the 'mac' key? 
result = yield self._do_shared_secret_registration( desired_username, desired_password, body ) defer.returnValue((200, result)) # we throw for non 200 responses return # == Normal User Registration == (everyone else) if not self.hs.config.enable_registration: raise SynapseError(403, "Registration has been disabled") guest_access_token = body.get("guest_access_token", None) if ( 'initial_device_display_name' in body and 'password' not in body ): # ignore 'initial_device_display_name' if sent without # a password to work around a client bug where it sent # the 'initial_device_display_name' param alone, wiping out # the original registration params logger.warn("Ignoring initial_device_display_name without password") del body['initial_device_display_name'] session_id = self.auth_handler.get_session_id(body) registered_user_id = None if session_id: # if we get a registered user id out of here, it means we previously # registered a user for this session, so we could just return the # user here. We carry on and go through the auth checks though, # for paranoia. registered_user_id = self.auth_handler.get_session_data( session_id, "registered_user_id", None ) if desired_username is not None: yield self.registration_handler.check_username( desired_username, guest_access_token=guest_access_token, assigned_user_id=registered_user_id, ) # Only give msisdn flows if the x_show_msisdn flag is given: # this is a hack to work around the fact that clients were shipped # that use fallback registration if they see any flows that they don't # recognise, which means we break registration for these clients if we # advertise msisdn flows. Once usage of Riot iOS <=0.3.9 and Riot # Android <=0.6.9 have fallen below an acceptable threshold, this # parameter should go away and we should always advertise msisdn flows. 
show_msisdn = False if 'x_show_msisdn' in body and body['x_show_msisdn']: show_msisdn = True if self.hs.config.enable_registration_captcha: flows = [ [LoginType.RECAPTCHA], [LoginType.EMAIL_IDENTITY, LoginType.RECAPTCHA], ] if show_msisdn: flows.extend([ [LoginType.MSISDN, LoginType.RECAPTCHA], [LoginType.MSISDN, LoginType.EMAIL_IDENTITY, LoginType.RECAPTCHA], ]) else: flows = [ [LoginType.DUMMY], [LoginType.EMAIL_IDENTITY], ] if show_msisdn: flows.extend([ [LoginType.MSISDN], [LoginType.MSISDN, LoginType.EMAIL_IDENTITY], ]) authed, auth_result, params, session_id = yield self.auth_handler.check_auth( flows, body, self.hs.get_ip_from_request(request) ) if not authed: defer.returnValue((401, auth_result)) return if registered_user_id is not None: logger.info( "Already registered user ID %r for this session", registered_user_id ) # don't re-register the threepids add_email = False add_msisdn = False else: # NB: This may be from the auth handler and NOT from the POST if 'password' not in params: raise SynapseError(400, "Missing password.", Codes.MISSING_PARAM) desired_username = params.get("username", None) new_password = params.get("password", None) guest_access_token = params.get("guest_access_token", None) (registered_user_id, _) = yield self.registration_handler.register( localpart=desired_username, password=new_password, guest_access_token=guest_access_token, generate_token=False, ) # auto-join the user to any rooms we're supposed to dump them into fake_requester = synapse.types.create_requester(registered_user_id) for r in self.hs.config.auto_join_rooms: try: yield self._join_user_to_room(fake_requester, r) except Exception as e: logger.error("Failed to join new user to %r: %r", r, e) # remember that we've now registered that user account, and with # what user ID (since the user may not have specified) self.auth_handler.set_session_data( session_id, "registered_user_id", registered_user_id ) add_email = True add_msisdn = True return_dict = yield self._create_registration_details( registered_user_id, params ) if add_email and auth_result and LoginType.EMAIL_IDENTITY in auth_result: threepid = auth_result[LoginType.EMAIL_IDENTITY] yield self._register_email_threepid( registered_user_id, threepid, return_dict["access_token"], params.get("bind_email") ) if add_msisdn and auth_result and LoginType.MSISDN in auth_result: threepid = auth_result[LoginType.MSISDN] yield self._register_msisdn_threepid( registered_user_id, threepid, return_dict["access_token"], params.get("bind_msisdn") ) defer.returnValue((200, return_dict)) def on_OPTIONS(self, _): return 200, {} @defer.inlineCallbacks def _join_user_to_room(self, requester, room_identifier): room_id = None if RoomID.is_valid(room_identifier): room_id = room_identifier elif RoomAlias.is_valid(room_identifier): room_alias = RoomAlias.from_string(room_identifier) room_id, remote_room_hosts = ( yield self.room_member_handler.lookup_room_alias(room_alias) ) room_id = room_id.to_string() else: raise SynapseError(400, "%s was not legal room ID or room alias" % ( room_identifier, )) yield self.room_member_handler.update_membership( requester=requester, target=requester.user, room_id=room_id, action="join", ) @defer.inlineCallbacks def _do_appservice_registration(self, username, as_token, body): user_id = yield self.registration_handler.appservice_register( username, as_token ) defer.returnValue((yield self._create_registration_details(user_id, body))) @defer.inlineCallbacks def _do_shared_secret_registration(self, username, password, body): if not 
self.hs.config.registration_shared_secret: raise SynapseError(400, "Shared secret registration is not enabled") user = username.encode("utf-8") # str() because otherwise hmac complains that 'unicode' does not # have the buffer interface got_mac = str(body["mac"]) want_mac = hmac.new( key=self.hs.config.registration_shared_secret, msg=user, digestmod=sha1, ).hexdigest() if not compare_digest(want_mac, got_mac): raise SynapseError( 403, "HMAC incorrect", ) (user_id, _) = yield self.registration_handler.register( localpart=username, password=password, generate_token=False, ) result = yield self._create_registration_details(user_id, body) defer.returnValue(result) @defer.inlineCallbacks def _register_email_threepid(self, user_id, threepid, token, bind_email): """Add an email address as a 3pid identifier Also adds an email pusher for the email address, if configured in the HS config Also optionally binds emails to the given user_id on the identity server Args: user_id (str): id of user threepid (object): m.login.email.identity auth response token (str): access_token for the user bind_email (bool): true if the client requested the email to be bound at the identity server Returns: defer.Deferred: """ reqd = ('medium', 'address', 'validated_at') if any(x not in threepid for x in reqd): # This will only happen if the ID server returns a malformed response logger.info("Can't add incomplete 3pid") return yield self.auth_handler.add_threepid( user_id, threepid['medium'], threepid['address'], threepid['validated_at'], ) # And we add an email pusher for them by default, but only # if email notifications are enabled (so people don't start # getting mail spam where they weren't before if email # notifs are set up on a home server) if (self.hs.config.email_enable_notifs and self.hs.config.email_notif_for_new_users): # Pull the ID of the access token back out of the db # It would really make more sense for this to be passed # up when the access token is saved, but that's quite an # invasive change I'd rather do separately. 
user_tuple = yield self.store.get_user_by_access_token( token ) token_id = user_tuple["token_id"] yield self.hs.get_pusherpool().add_pusher( user_id=user_id, access_token=token_id, kind="email", app_id="m.email", app_display_name="Email Notifications", device_display_name=threepid["address"], pushkey=threepid["address"], lang=None, # We don't know a user's language here data={}, ) if bind_email: logger.info("bind_email specified: binding") logger.debug("Binding emails %s to %s" % ( threepid, user_id )) yield self.identity_handler.bind_threepid( threepid['threepid_creds'], user_id ) else: logger.info("bind_email not specified: not binding email") @defer.inlineCallbacks def _register_msisdn_threepid(self, user_id, threepid, token, bind_msisdn): """Add a phone number as a 3pid identifier Also optionally binds msisdn to the given user_id on the identity server Args: user_id (str): id of user threepid (object): m.login.msisdn auth response token (str): access_token for the user bind_email (bool): true if the client requested the email to be bound at the identity server Returns: defer.Deferred: """ reqd = ('medium', 'address', 'validated_at') if any(x not in threepid for x in reqd): # This will only happen if the ID server returns a malformed response logger.info("Can't add incomplete 3pid") defer.returnValue() yield self.auth_handler.add_threepid( user_id, threepid['medium'], threepid['address'], threepid['validated_at'], ) if bind_msisdn: logger.info("bind_msisdn specified: binding") logger.debug("Binding msisdn %s to %s", threepid, user_id) yield self.identity_handler.bind_threepid( threepid['threepid_creds'], user_id ) else: logger.info("bind_msisdn not specified: not binding msisdn") @defer.inlineCallbacks def _create_registration_details(self, user_id, params): """Complete registration of newly-registered user Allocates device_id if one was not given; also creates access_token. Args: (str) user_id: full canonical @user:id (object) params: registration parameters, from which we pull device_id and initial_device_name Returns: defer.Deferred: (object) dictionary for response from /register """ device_id = yield self._register_device(user_id, params) access_token = ( yield self.auth_handler.get_access_token_for_user_id( user_id, device_id=device_id, initial_display_name=params.get("initial_device_display_name") ) ) defer.returnValue({ "user_id": user_id, "access_token": access_token, "home_server": self.hs.hostname, "device_id": device_id, }) def _register_device(self, user_id, params): """Register a device for a user. This is called after the user's credentials have been validated, but before the access token has been issued. Args: (str) user_id: full canonical @user:id (object) params: registration parameters, from which we pull device_id and initial_device_name Returns: defer.Deferred: (str) device_id """ # register the user's device device_id = params.get("device_id") initial_display_name = params.get("initial_device_display_name") return self.device_handler.check_device_registered( user_id, device_id, initial_display_name ) @defer.inlineCallbacks def _do_guest_registration(self, params): if not self.hs.config.allow_guest_access: defer.returnValue((403, "Guest access is disabled")) user_id, _ = yield self.registration_handler.register( generate_token=False, make_guest=True ) # we don't allow guests to specify their own device_id, because # we have nowhere to store it. 
device_id = synapse.api.auth.GUEST_DEVICE_ID initial_display_name = params.get("initial_device_display_name") yield self.device_handler.check_device_registered( user_id, device_id, initial_display_name ) access_token = self.macaroon_gen.generate_access_token( user_id, ["guest = true"] ) defer.returnValue((200, { "user_id": user_id, "device_id": device_id, "access_token": access_token, "home_server": self.hs.hostname, })) def register_servlets(hs, http_server): EmailRegisterRequestTokenRestServlet(hs).register(http_server) MsisdnRegisterRequestTokenRestServlet(hs).register(http_server) UsernameAvailabilityRestServlet(hs).register(http_server) RegisterRestServlet(hs).register(http_server) synapse-0.24.0/synapse/rest/client/v2_alpha/report_event.py000066400000000000000000000034141317335640100237630ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from twisted.internet import defer from synapse.http.servlet import RestServlet, parse_json_object_from_request from ._base import client_v2_patterns import logging logger = logging.getLogger(__name__) class ReportEventRestServlet(RestServlet): PATTERNS = client_v2_patterns( "/rooms/(?P[^/]*)/report/(?P[^/]*)$" ) def __init__(self, hs): super(ReportEventRestServlet, self).__init__() self.hs = hs self.auth = hs.get_auth() self.clock = hs.get_clock() self.store = hs.get_datastore() @defer.inlineCallbacks def on_POST(self, request, room_id, event_id): requester = yield self.auth.get_user_by_req(request) user_id = requester.user.to_string() body = parse_json_object_from_request(request) yield self.store.add_event_report( room_id=room_id, event_id=event_id, user_id=user_id, reason=body.get("reason"), content=body, received_ts=self.clock.time_msec(), ) defer.returnValue((200, {})) def register_servlets(hs, http_server): ReportEventRestServlet(hs).register(http_server) synapse-0.24.0/synapse/rest/client/v2_alpha/sendtodevice.py000066400000000000000000000042231317335640100237220ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
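# A hedged, client-side sketch of the shared-secret registration flow handled by
# _do_shared_secret_registration above: the MAC is an HMAC-SHA1 of the desired
# username, keyed with registration_shared_secret from the homeserver config.
# The /_matrix/client/r0/register URL and the `requests` dependency are
# illustrative assumptions, not something this module provides.
import hmac
from hashlib import sha1

import requests  # assumed HTTP client; any other would do


def register_with_shared_secret(hs_url, shared_secret, username, password):
    # Must match the server side exactly: hmac.new(key=secret, msg=username, sha1)
    mac = hmac.new(
        key=shared_secret.encode("utf-8"),
        msg=username.encode("utf-8"),
        digestmod=sha1,
    ).hexdigest()
    resp = requests.post(
        "%s/_matrix/client/r0/register" % (hs_url,),
        json={"username": username, "password": password, "mac": mac},
    )
    return resp.json()  # contains user_id, access_token, device_id on success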
import logging from twisted.internet import defer from synapse.http import servlet from synapse.http.servlet import parse_json_object_from_request from synapse.rest.client.transactions import HttpTransactionCache from ._base import client_v2_patterns logger = logging.getLogger(__name__) class SendToDeviceRestServlet(servlet.RestServlet): PATTERNS = client_v2_patterns( "/sendToDevice/(?P[^/]*)/(?P[^/]*)$", releases=[], v2_alpha=False ) def __init__(self, hs): """ Args: hs (synapse.server.HomeServer): server """ super(SendToDeviceRestServlet, self).__init__() self.hs = hs self.auth = hs.get_auth() self.txns = HttpTransactionCache(hs.get_clock()) self.device_message_handler = hs.get_device_message_handler() def on_PUT(self, request, message_type, txn_id): return self.txns.fetch_or_execute_request( request, self._put, request, message_type, txn_id ) @defer.inlineCallbacks def _put(self, request, message_type, txn_id): requester = yield self.auth.get_user_by_req(request, allow_guest=True) content = parse_json_object_from_request(request) sender_user_id = requester.user.to_string() yield self.device_message_handler.send_device_message( sender_user_id, message_type, content["messages"] ) response = (200, {}) defer.returnValue(response) def register_servlets(hs, http_server): SendToDeviceRestServlet(hs).register(http_server) synapse-0.24.0/synapse/rest/client/v2_alpha/sync.py000066400000000000000000000330561317335640100222300ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from twisted.internet import defer from synapse.http.servlet import ( RestServlet, parse_string, parse_integer, parse_boolean ) from synapse.handlers.presence import format_user_presence_state from synapse.handlers.sync import SyncConfig from synapse.types import StreamToken from synapse.events.utils import ( serialize_event, format_event_for_client_v2_without_room_id, ) from synapse.api.filtering import FilterCollection, DEFAULT_FILTER_COLLECTION from synapse.api.errors import SynapseError from synapse.api.constants import PresenceState from ._base import client_v2_patterns from ._base import set_timeline_upper_limit import itertools import logging import ujson as json logger = logging.getLogger(__name__) class SyncRestServlet(RestServlet): """ GET parameters:: timeout(int): How long to wait for new events in milliseconds. since(batch_token): Batch token when asking for incremental deltas. set_presence(str): What state the device presence should be set to. default is "online". filter(filter_id): A filter to apply to the events returned. Response JSON:: { "next_batch": // batch token for the next /sync "presence": // presence data for the user. "rooms": { "join": { // Joined rooms being updated. "${room_id}": { // Id of the room being updated "event_map": // Map of EventID -> event JSON. "timeline": { // The recent events in the room if gap is "true" "limited": // Was the per-room event limit exceeded? // otherwise the next events in the room. 
"events": [] // list of EventIDs in the "event_map". "prev_batch": // back token for getting previous events. } "state": {"events": []} // list of EventIDs updating the // current state to be what it should // be at the end of the batch. "ephemeral": {"events": []} // list of event objects } }, "invite": {}, // Invited rooms being updated. "leave": {} // Archived rooms being updated. } } """ PATTERNS = client_v2_patterns("/sync$") ALLOWED_PRESENCE = set(["online", "offline"]) def __init__(self, hs): super(SyncRestServlet, self).__init__() self.hs = hs self.auth = hs.get_auth() self.sync_handler = hs.get_sync_handler() self.clock = hs.get_clock() self.filtering = hs.get_filtering() self.presence_handler = hs.get_presence_handler() @defer.inlineCallbacks def on_GET(self, request): if "from" in request.args: # /events used to use 'from', but /sync uses 'since'. # Lets be helpful and whine if we see a 'from'. raise SynapseError( 400, "'from' is not a valid query parameter. Did you mean 'since'?" ) requester = yield self.auth.get_user_by_req( request, allow_guest=True ) user = requester.user device_id = requester.device_id timeout = parse_integer(request, "timeout", default=0) since = parse_string(request, "since") set_presence = parse_string( request, "set_presence", default="online", allowed_values=self.ALLOWED_PRESENCE ) filter_id = parse_string(request, "filter", default=None) full_state = parse_boolean(request, "full_state", default=False) logger.debug( "/sync: user=%r, timeout=%r, since=%r," " set_presence=%r, filter_id=%r, device_id=%r" % ( user, timeout, since, set_presence, filter_id, device_id ) ) request_key = (user, timeout, since, filter_id, full_state, device_id) if filter_id: if filter_id.startswith('{'): try: filter_object = json.loads(filter_id) set_timeline_upper_limit(filter_object, self.hs.config.filter_timeline_limit) except: raise SynapseError(400, "Invalid filter JSON") self.filtering.check_valid_filter(filter_object) filter = FilterCollection(filter_object) else: filter = yield self.filtering.get_user_filter( user.localpart, filter_id ) else: filter = DEFAULT_FILTER_COLLECTION sync_config = SyncConfig( user=user, filter_collection=filter, is_guest=requester.is_guest, request_key=request_key, device_id=device_id, ) if since is not None: since_token = StreamToken.from_string(since) else: since_token = None affect_presence = set_presence != PresenceState.OFFLINE if affect_presence: yield self.presence_handler.set_state(user, {"presence": set_presence}, True) context = yield self.presence_handler.user_syncing( user.to_string(), affect_presence=affect_presence, ) with context: sync_result = yield self.sync_handler.wait_for_sync_for_user( sync_config, since_token=since_token, timeout=timeout, full_state=full_state ) time_now = self.clock.time_msec() response_content = self.encode_response( time_now, sync_result, requester.access_token_id, filter ) defer.returnValue((200, response_content)) @staticmethod def encode_response(time_now, sync_result, access_token_id, filter): joined = SyncRestServlet.encode_joined( sync_result.joined, time_now, access_token_id, filter.event_fields ) invited = SyncRestServlet.encode_invited( sync_result.invited, time_now, access_token_id, ) archived = SyncRestServlet.encode_archived( sync_result.archived, time_now, access_token_id, filter.event_fields, ) return { "account_data": {"events": sync_result.account_data}, "to_device": {"events": sync_result.to_device}, "device_lists": { "changed": list(sync_result.device_lists.changed), "left": 
list(sync_result.device_lists.left), }, "presence": SyncRestServlet.encode_presence( sync_result.presence, time_now ), "rooms": { "join": joined, "invite": invited, "leave": archived, }, "groups": { "join": sync_result.groups.join, "invite": sync_result.groups.invite, "leave": sync_result.groups.leave, }, "device_one_time_keys_count": sync_result.device_one_time_keys_count, "next_batch": sync_result.next_batch.to_string(), } @staticmethod def encode_presence(events, time_now): return { "events": [ { "type": "m.presence", "sender": event.user_id, "content": format_user_presence_state( event, time_now, include_user_id=False ), } for event in events ] } @staticmethod def encode_joined(rooms, time_now, token_id, event_fields): """ Encode the joined rooms in a sync result Args: rooms(list[synapse.handlers.sync.JoinedSyncResult]): list of sync results for rooms this user is joined to time_now(int): current time - used as a baseline for age calculations token_id(int): ID of the user's auth token - used for namespacing of transaction IDs event_fields(list): List of event fields to include. If empty, all fields will be returned. Returns: dict[str, dict[str, object]]: the joined rooms list, in our response format """ joined = {} for room in rooms: joined[room.room_id] = SyncRestServlet.encode_room( room, time_now, token_id, only_fields=event_fields ) return joined @staticmethod def encode_invited(rooms, time_now, token_id): """ Encode the invited rooms in a sync result Args: rooms(list[synapse.handlers.sync.InvitedSyncResult]): list of sync results for rooms this user is joined to time_now(int): current time - used as a baseline for age calculations token_id(int): ID of the user's auth token - used for namespacing of transaction IDs Returns: dict[str, dict[str, object]]: the invited rooms list, in our response format """ invited = {} for room in rooms: invite = serialize_event( room.invite, time_now, token_id=token_id, event_format=format_event_for_client_v2_without_room_id, is_invite=True, ) unsigned = dict(invite.get("unsigned", {})) invite["unsigned"] = unsigned invited_state = list(unsigned.pop("invite_room_state", [])) invited_state.append(invite) invited[room.room_id] = { "invite_state": {"events": invited_state} } return invited @staticmethod def encode_archived(rooms, time_now, token_id, event_fields): """ Encode the archived rooms in a sync result Args: rooms (list[synapse.handlers.sync.ArchivedSyncResult]): list of sync results for rooms this user is joined to time_now(int): current time - used as a baseline for age calculations token_id(int): ID of the user's auth token - used for namespacing of transaction IDs event_fields(list): List of event fields to include. If empty, all fields will be returned. Returns: dict[str, dict[str, object]]: The invited rooms list, in our response format """ joined = {} for room in rooms: joined[room.room_id] = SyncRestServlet.encode_room( room, time_now, token_id, joined=False, only_fields=event_fields ) return joined @staticmethod def encode_room(room, time_now, token_id, joined=True, only_fields=None): """ Args: room (JoinedSyncResult|ArchivedSyncResult): sync result for a single room time_now (int): current time - used as a baseline for age calculations token_id (int): ID of the user's auth token - used for namespacing of transaction IDs joined (bool): True if the user is joined to this room - will mean we handle ephemeral events only_fields(list): Optional. The list of event fields to include. 
Returns: dict[str, object]: the room, encoded in our response format """ def serialize(event): # TODO(mjark): Respect formatting requirements in the filter. return serialize_event( event, time_now, token_id=token_id, event_format=format_event_for_client_v2_without_room_id, only_event_fields=only_fields, ) state_dict = room.state timeline_events = room.timeline.events state_events = state_dict.values() for event in itertools.chain(state_events, timeline_events): # We've had bug reports that events were coming down under the # wrong room. if event.room_id != room.room_id: logger.warn( "Event %r is under room %r instead of %r", event.event_id, room.room_id, event.room_id, ) serialized_state = [serialize(e) for e in state_events] serialized_timeline = [serialize(e) for e in timeline_events] account_data = room.account_data result = { "timeline": { "events": serialized_timeline, "prev_batch": room.timeline.prev_batch.to_string(), "limited": room.timeline.limited, }, "state": {"events": serialized_state}, "account_data": {"events": account_data}, } if joined: ephemeral_events = room.ephemeral result["ephemeral"] = {"events": ephemeral_events} result["unread_notifications"] = room.unread_notifications return result def register_servlets(hs, http_server): SyncRestServlet(hs).register(http_server) synapse-0.24.0/synapse/rest/client/v2_alpha/tags.py000066400000000000000000000062341317335640100222100ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
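# A hedged sketch of driving the /sync endpoint served by SyncRestServlet above:
# an incremental long-poll using the previous response's next_batch as `since`,
# plus an inline JSON filter (accepted because the string starts with '{').
# The r0 URL prefix and the `requests` dependency are assumptions for the example.
import json

import requests  # assumed HTTP client


def incremental_sync(hs_url, access_token, since_token=None):
    params = {
        "access_token": access_token,
        "timeout": 30000,  # long-poll for up to 30s waiting for new events
        # limit each room's timeline to 10 events; other filter fields stay default
        "filter": json.dumps({"room": {"timeline": {"limit": 10}}}),
    }
    if since_token is not None:
        params["since"] = since_token  # omit on the very first sync
    body = requests.get("%s/_matrix/client/r0/sync" % (hs_url,), params=params).json()
    # "next_batch" becomes the `since` parameter of the next call
    return body["next_batch"], body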
from ._base import client_v2_patterns from synapse.http.servlet import RestServlet, parse_json_object_from_request from synapse.api.errors import AuthError from twisted.internet import defer import logging logger = logging.getLogger(__name__) class TagListServlet(RestServlet): """ GET /user/{user_id}/rooms/{room_id}/tags HTTP/1.1 """ PATTERNS = client_v2_patterns( "/user/(?P[^/]*)/rooms/(?P[^/]*)/tags" ) def __init__(self, hs): super(TagListServlet, self).__init__() self.auth = hs.get_auth() self.store = hs.get_datastore() @defer.inlineCallbacks def on_GET(self, request, user_id, room_id): requester = yield self.auth.get_user_by_req(request) if user_id != requester.user.to_string(): raise AuthError(403, "Cannot get tags for other users.") tags = yield self.store.get_tags_for_room(user_id, room_id) defer.returnValue((200, {"tags": tags})) class TagServlet(RestServlet): """ PUT /user/{user_id}/rooms/{room_id}/tags/{tag} HTTP/1.1 DELETE /user/{user_id}/rooms/{room_id}/tags/{tag} HTTP/1.1 """ PATTERNS = client_v2_patterns( "/user/(?P[^/]*)/rooms/(?P[^/]*)/tags/(?P[^/]*)" ) def __init__(self, hs): super(TagServlet, self).__init__() self.auth = hs.get_auth() self.store = hs.get_datastore() self.notifier = hs.get_notifier() @defer.inlineCallbacks def on_PUT(self, request, user_id, room_id, tag): requester = yield self.auth.get_user_by_req(request) if user_id != requester.user.to_string(): raise AuthError(403, "Cannot add tags for other users.") body = parse_json_object_from_request(request) max_id = yield self.store.add_tag_to_room(user_id, room_id, tag, body) self.notifier.on_new_event( "account_data_key", max_id, users=[user_id] ) defer.returnValue((200, {})) @defer.inlineCallbacks def on_DELETE(self, request, user_id, room_id, tag): requester = yield self.auth.get_user_by_req(request) if user_id != requester.user.to_string(): raise AuthError(403, "Cannot add tags for other users.") max_id = yield self.store.remove_tag_from_room(user_id, room_id, tag) self.notifier.on_new_event( "account_data_key", max_id, users=[user_id] ) defer.returnValue((200, {})) def register_servlets(hs, http_server): TagListServlet(hs).register(http_server) TagServlet(hs).register(http_server) synapse-0.24.0/synapse/rest/client/v2_alpha/thirdparty.py000066400000000000000000000075401317335640100234450ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
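# A small sketch of exercising TagServlet/TagListServlet above: PUT a tag on a
# room for the calling user, then list the tags back. The "order" field in the
# body is an assumption based on common client usage - the servlet stores
# whatever JSON object it is given. URL prefix and `requests` are assumed too.
import requests  # assumed HTTP client


def tag_room(hs_url, access_token, user_id, room_id, tag):
    base = "%s/_matrix/client/r0/user/%s/rooms/%s/tags" % (hs_url, user_id, room_id)
    params = {"access_token": access_token}
    # Only the calling user may tag their own rooms (enforced by the servlet).
    requests.put("%s/%s" % (base, tag), params=params, json={"order": 0.5})
    # e.g. {"tags": {"m.favourite": {"order": 0.5}}}
    return requests.get(base, params=params).json()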
import logging from twisted.internet import defer from synapse.api.constants import ThirdPartyEntityKind from synapse.http.servlet import RestServlet from ._base import client_v2_patterns logger = logging.getLogger(__name__) class ThirdPartyProtocolsServlet(RestServlet): PATTERNS = client_v2_patterns("/thirdparty/protocols", releases=()) def __init__(self, hs): super(ThirdPartyProtocolsServlet, self).__init__() self.auth = hs.get_auth() self.appservice_handler = hs.get_application_service_handler() @defer.inlineCallbacks def on_GET(self, request): yield self.auth.get_user_by_req(request, allow_guest=True) protocols = yield self.appservice_handler.get_3pe_protocols() defer.returnValue((200, protocols)) class ThirdPartyProtocolServlet(RestServlet): PATTERNS = client_v2_patterns("/thirdparty/protocol/(?P[^/]+)$", releases=()) def __init__(self, hs): super(ThirdPartyProtocolServlet, self).__init__() self.auth = hs.get_auth() self.appservice_handler = hs.get_application_service_handler() @defer.inlineCallbacks def on_GET(self, request, protocol): yield self.auth.get_user_by_req(request, allow_guest=True) protocols = yield self.appservice_handler.get_3pe_protocols( only_protocol=protocol, ) if protocol in protocols: defer.returnValue((200, protocols[protocol])) else: defer.returnValue((404, {"error": "Unknown protocol"})) class ThirdPartyUserServlet(RestServlet): PATTERNS = client_v2_patterns("/thirdparty/user(/(?P[^/]+))?$", releases=()) def __init__(self, hs): super(ThirdPartyUserServlet, self).__init__() self.auth = hs.get_auth() self.appservice_handler = hs.get_application_service_handler() @defer.inlineCallbacks def on_GET(self, request, protocol): yield self.auth.get_user_by_req(request, allow_guest=True) fields = request.args fields.pop("access_token", None) results = yield self.appservice_handler.query_3pe( ThirdPartyEntityKind.USER, protocol, fields ) defer.returnValue((200, results)) class ThirdPartyLocationServlet(RestServlet): PATTERNS = client_v2_patterns("/thirdparty/location(/(?P[^/]+))?$", releases=()) def __init__(self, hs): super(ThirdPartyLocationServlet, self).__init__() self.auth = hs.get_auth() self.appservice_handler = hs.get_application_service_handler() @defer.inlineCallbacks def on_GET(self, request, protocol): yield self.auth.get_user_by_req(request, allow_guest=True) fields = request.args fields.pop("access_token", None) results = yield self.appservice_handler.query_3pe( ThirdPartyEntityKind.LOCATION, protocol, fields ) defer.returnValue((200, results)) def register_servlets(hs, http_server): ThirdPartyProtocolsServlet(hs).register(http_server) ThirdPartyProtocolServlet(hs).register(http_server) ThirdPartyUserServlet(hs).register(http_server) ThirdPartyLocationServlet(hs).register(http_server) synapse-0.24.0/synapse/rest/client/v2_alpha/tokenrefresh.py000066400000000000000000000024011317335640100237410ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
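# A hedged sketch of querying the third-party lookup servlets above: list the
# bridged protocols, then look up users on one of them. Extra query parameters
# (anything other than access_token) are forwarded to the application services
# as lookup fields. The unstable URL prefix, the "nick" field name and the
# `requests` dependency are illustrative assumptions.
import requests  # assumed HTTP client


def find_bridged_users(hs_url, access_token, protocol, **fields):
    params = {"access_token": access_token}
    protocols = requests.get(
        "%s/_matrix/client/unstable/thirdparty/protocols" % (hs_url,), params=params
    ).json()
    if protocol not in protocols:
        return []
    params.update(fields)  # e.g. nick="alice" for an IRC-style bridge
    return requests.get(
        "%s/_matrix/client/unstable/thirdparty/user/%s" % (hs_url, protocol),
        params=params,
    ).json()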
from twisted.internet import defer from synapse.api.errors import AuthError from synapse.http.servlet import RestServlet from ._base import client_v2_patterns class TokenRefreshRestServlet(RestServlet): """ Exchanges refresh tokens for a pair of an access token and a new refresh token. """ PATTERNS = client_v2_patterns("/tokenrefresh") def __init__(self, hs): super(TokenRefreshRestServlet, self).__init__() @defer.inlineCallbacks def on_POST(self, request): raise AuthError(403, "tokenrefresh is no longer supported.") def register_servlets(hs, http_server): TokenRefreshRestServlet(hs).register(http_server) synapse-0.24.0/synapse/rest/client/v2_alpha/user_directory.py000066400000000000000000000047351317335640100243200ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2017 Vector Creations Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import logging from twisted.internet import defer from synapse.api.errors import SynapseError from synapse.http.servlet import RestServlet, parse_json_object_from_request from ._base import client_v2_patterns logger = logging.getLogger(__name__) class UserDirectorySearchRestServlet(RestServlet): PATTERNS = client_v2_patterns("/user_directory/search$") def __init__(self, hs): """ Args: hs (synapse.server.HomeServer): server """ super(UserDirectorySearchRestServlet, self).__init__() self.hs = hs self.auth = hs.get_auth() self.user_directory_handler = hs.get_user_directory_handler() @defer.inlineCallbacks def on_POST(self, request): """Searches for users in directory Returns: dict of the form:: { "limited": , # whether there were more results or not "results": [ # Ordered by best match first { "user_id": , "display_name": , "avatar_url": } ] } """ requester = yield self.auth.get_user_by_req(request, allow_guest=False) user_id = requester.user.to_string() body = parse_json_object_from_request(request) limit = body.get("limit", 10) limit = min(limit, 50) try: search_term = body["search_term"] except: raise SynapseError(400, "`search_term` is required field") results = yield self.user_directory_handler.search_users( user_id, search_term, limit, ) defer.returnValue((200, results)) def register_servlets(hs, http_server): UserDirectorySearchRestServlet(hs).register(http_server) synapse-0.24.0/synapse/rest/client/versions.py000066400000000000000000000020721317335640100214220ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
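# A minimal sketch of calling UserDirectorySearchRestServlet above. The body must
# contain "search_term"; "limit" defaults to 10 and is capped at 50 server-side.
# The r0 URL prefix and the `requests` dependency are assumptions for the example.
import requests  # assumed HTTP client


def search_users(hs_url, access_token, term, limit=10):
    body = requests.post(
        "%s/_matrix/client/r0/user_directory/search" % (hs_url,),
        params={"access_token": access_token},
        json={"search_term": term, "limit": limit},
    ).json()
    # body["limited"] says whether more matches than `limit` were available.
    return body.get("results", [])  # [{"user_id", "display_name", "avatar_url"}, ...]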
from synapse.http.servlet import RestServlet import logging import re logger = logging.getLogger(__name__) class VersionsRestServlet(RestServlet): PATTERNS = [re.compile("^/_matrix/client/versions$")] def on_GET(self, request): return (200, { "versions": [ "r0.0.1", "r0.1.0", "r0.2.0", ] }) def register_servlets(http_server): VersionsRestServlet().register(http_server) synapse-0.24.0/synapse/rest/key/000077500000000000000000000000001317335640100165115ustar00rootroot00000000000000synapse-0.24.0/synapse/rest/key/__init__.py000066400000000000000000000011401317335640100206160ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. synapse-0.24.0/synapse/rest/key/v1/000077500000000000000000000000001317335640100170375ustar00rootroot00000000000000synapse-0.24.0/synapse/rest/key/v1/__init__.py000066400000000000000000000011401317335640100211440ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. synapse-0.24.0/synapse/rest/key/v1/server_key_resource.py000066400000000000000000000056361317335640100235100ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from twisted.web.resource import Resource from synapse.http.server import respond_with_json_bytes from signedjson.sign import sign_json from unpaddedbase64 import encode_base64 from canonicaljson import encode_canonical_json from OpenSSL import crypto import logging logger = logging.getLogger(__name__) class LocalKey(Resource): """HTTP resource containing encoding the TLS X.509 certificate and NACL signature verification keys for this server:: GET /key HTTP/1.1 HTTP/1.1 200 OK Content-Type: application/json { "server_name": "this.server.example.com" "verify_keys": { "algorithm:version": # base64 encoded NACL verification key. }, "tls_certificate": # base64 ASN.1 DER encoded X.509 tls cert. 
"signatures": { "this.server.example.com": { "algorithm:version": # NACL signature for this server. } } } """ def __init__(self, hs): self.version_string = hs.version_string self.response_body = encode_canonical_json( self.response_json_object(hs.config) ) Resource.__init__(self) @staticmethod def response_json_object(server_config): verify_keys = {} for key in server_config.signing_key: verify_key_bytes = key.verify_key.encode() key_id = "%s:%s" % (key.alg, key.version) verify_keys[key_id] = encode_base64(verify_key_bytes) x509_certificate_bytes = crypto.dump_certificate( crypto.FILETYPE_ASN1, server_config.tls_certificate ) json_object = { u"server_name": server_config.server_name, u"verify_keys": verify_keys, u"tls_certificate": encode_base64(x509_certificate_bytes) } for key in server_config.signing_key: json_object = sign_json( json_object, server_config.server_name, key, ) return json_object def render_GET(self, request): return respond_with_json_bytes( request, 200, self.response_body, version_string=self.version_string ) def getChild(self, name, request): if name == '': return self synapse-0.24.0/synapse/rest/key/v2/000077500000000000000000000000001317335640100170405ustar00rootroot00000000000000synapse-0.24.0/synapse/rest/key/v2/__init__.py000066400000000000000000000016331317335640100211540ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from twisted.web.resource import Resource from .local_key_resource import LocalKey from .remote_key_resource import RemoteKey class KeyApiV2Resource(Resource): def __init__(self, hs): Resource.__init__(self) self.putChild("server", LocalKey(hs)) self.putChild("query", RemoteKey(hs)) synapse-0.24.0/synapse/rest/key/v2/local_key_resource.py000066400000000000000000000102231317335640100232610ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
from twisted.web.resource import Resource from synapse.http.server import respond_with_json_bytes from signedjson.sign import sign_json from unpaddedbase64 import encode_base64 from canonicaljson import encode_canonical_json import logging logger = logging.getLogger(__name__) class LocalKey(Resource): """HTTP resource containing encoding the TLS X.509 certificate and NACL signature verification keys for this server:: GET /_matrix/key/v2/server/a.key.id HTTP/1.1 HTTP/1.1 200 OK Content-Type: application/json { "valid_until_ts": # integer posix timestamp when this result expires. "server_name": "this.server.example.com" "verify_keys": { "algorithm:version": { "key": # base64 encoded NACL verification key. } }, "old_verify_keys": { "algorithm:version": { "expired_ts": # integer posix timestamp when the key expired. "key": # base64 encoded NACL verification key. } }, "tls_fingerprints": [ # Fingerprints of the TLS certs this server uses. { "sha256": # base64 encoded sha256 fingerprint of the X509 cert }, ], "signatures": { "this.server.example.com": { "algorithm:version": # NACL signature for this server } } } """ isLeaf = True def __init__(self, hs): self.version_string = hs.version_string self.config = hs.config self.clock = hs.clock self.update_response_body(self.clock.time_msec()) Resource.__init__(self) def update_response_body(self, time_now_msec): refresh_interval = self.config.key_refresh_interval self.valid_until_ts = int(time_now_msec + refresh_interval) self.response_body = encode_canonical_json(self.response_json_object()) def response_json_object(self): verify_keys = {} for key in self.config.signing_key: verify_key_bytes = key.verify_key.encode() key_id = "%s:%s" % (key.alg, key.version) verify_keys[key_id] = { u"key": encode_base64(verify_key_bytes) } old_verify_keys = {} for key_id, key in self.config.old_signing_keys.items(): verify_key_bytes = key.encode() old_verify_keys[key_id] = { u"key": encode_base64(verify_key_bytes), u"expired_ts": key.expired_ts, } tls_fingerprints = self.config.tls_fingerprints json_object = { u"valid_until_ts": self.valid_until_ts, u"server_name": self.config.server_name, u"verify_keys": verify_keys, u"old_verify_keys": old_verify_keys, u"tls_fingerprints": tls_fingerprints, } for key in self.config.signing_key: json_object = sign_json( json_object, self.config.server_name, key, ) return json_object def render_GET(self, request): time_now = self.clock.time_msec() # Update the expiry time if less than half the interval remains. if time_now + self.config.key_refresh_interval / 2 > self.valid_until_ts: self.update_response_body(time_now) return respond_with_json_bytes( request, 200, self.response_body, version_string=self.version_string ) synapse-0.24.0/synapse/rest/key/v2/remote_key_resource.py000066400000000000000000000212401317335640100234630ustar00rootroot00000000000000# Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
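# A hedged sketch of verifying the signed JSON produced by LocalKey above, using
# the same signedjson/unpaddedbase64 libraries the resource uses when signing.
# `key_response` is assumed to be the already-fetched and parsed
# /_matrix/key/v2/server response (a dict in the format documented above).
from signedjson.key import decode_verify_key_bytes
from signedjson.sign import SignatureVerifyException, verify_signed_json
from unpaddedbase64 import decode_base64


def check_key_response(key_response):
    server_name = key_response["server_name"]
    for key_id, key_obj in key_response["verify_keys"].items():
        verify_key = decode_verify_key_bytes(key_id, decode_base64(key_obj["key"]))
        try:
            # sign_json added signatures[server_name][key_id]; check it against
            # the verification key the server itself advertises.
            verify_signed_json(key_response, server_name, verify_key)
        except SignatureVerifyException:
            return False
    return True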
from synapse.http.server import request_handler, respond_with_json_bytes from synapse.http.servlet import parse_integer, parse_json_object_from_request from synapse.api.errors import SynapseError, Codes from synapse.crypto.keyring import KeyLookupError from twisted.web.resource import Resource from twisted.web.server import NOT_DONE_YET from twisted.internet import defer from io import BytesIO import logging logger = logging.getLogger(__name__) class RemoteKey(Resource): """HTTP resource for retreiving the TLS certificate and NACL signature verification keys for a collection of servers. Checks that the reported X.509 TLS certificate matches the one used in the HTTPS connection. Checks that the NACL signature for the remote server is valid. Returns a dict of JSON signed by both the remote server and by this server. Supports individual GET APIs and a bulk query POST API. Requsts: GET /_matrix/key/v2/query/remote.server.example.com HTTP/1.1 GET /_matrix/key/v2/query/remote.server.example.com/a.key.id HTTP/1.1 POST /_matrix/v2/query HTTP/1.1 Content-Type: application/json { "server_keys": { "remote.server.example.com": { "a.key.id": { "minimum_valid_until_ts": 1234567890123 } } } } Response: HTTP/1.1 200 OK Content-Type: application/json { "server_keys": [ { "server_name": "remote.server.example.com" "valid_until_ts": # posix timestamp "verify_keys": { "a.key.id": { # The identifier for a key. key: "" # base64 encoded verification key. } } "old_verify_keys": { "an.old.key.id": { # The identifier for an old key. key: "", # base64 encoded key "expired_ts": 0, # when the key stop being used. } } "tls_fingerprints": [ { "sha256": # fingerprint } ] "signatures": { "remote.server.example.com": {...} "this.server.example.com": {...} } } ] } """ isLeaf = True def __init__(self, hs): self.keyring = hs.get_keyring() self.store = hs.get_datastore() self.version_string = hs.version_string self.clock = hs.get_clock() def render_GET(self, request): self.async_render_GET(request) return NOT_DONE_YET @request_handler() @defer.inlineCallbacks def async_render_GET(self, request): if len(request.postpath) == 1: server, = request.postpath query = {server: {}} elif len(request.postpath) == 2: server, key_id = request.postpath minimum_valid_until_ts = parse_integer( request, "minimum_valid_until_ts" ) arguments = {} if minimum_valid_until_ts is not None: arguments["minimum_valid_until_ts"] = minimum_valid_until_ts query = {server: {key_id: arguments}} else: raise SynapseError( 404, "Not found %r" % request.postpath, Codes.NOT_FOUND ) yield self.query_keys(request, query, query_remote_on_cache_miss=True) def render_POST(self, request): self.async_render_POST(request) return NOT_DONE_YET @request_handler() @defer.inlineCallbacks def async_render_POST(self, request): content = parse_json_object_from_request(request) query = content["server_keys"] yield self.query_keys(request, query, query_remote_on_cache_miss=True) @defer.inlineCallbacks def query_keys(self, request, query, query_remote_on_cache_miss=False): logger.info("Handling query for keys %r", query) store_queries = [] for server_name, key_ids in query.items(): if not key_ids: key_ids = (None,) for key_id in key_ids: store_queries.append((server_name, key_id, None)) cached = yield self.store.get_server_keys_json(store_queries) json_results = set() time_now_ms = self.clock.time_msec() cache_misses = dict() for (server_name, key_id, from_server), results in cached.items(): results = [ (result["ts_added_ms"], result) for result in results ] if not results and 
key_id is not None: cache_misses.setdefault(server_name, set()).add(key_id) continue if key_id is not None: ts_added_ms, most_recent_result = max(results) ts_valid_until_ms = most_recent_result["ts_valid_until_ms"] req_key = query.get(server_name, {}).get(key_id, {}) req_valid_until = req_key.get("minimum_valid_until_ts") miss = False if req_valid_until is not None: if ts_valid_until_ms < req_valid_until: logger.debug( "Cached response for %r/%r is older than requested" ": valid_until (%r) < minimum_valid_until (%r)", server_name, key_id, ts_valid_until_ms, req_valid_until ) miss = True else: logger.debug( "Cached response for %r/%r is newer than requested" ": valid_until (%r) >= minimum_valid_until (%r)", server_name, key_id, ts_valid_until_ms, req_valid_until ) elif (ts_added_ms + ts_valid_until_ms) / 2 < time_now_ms: logger.debug( "Cached response for %r/%r is too old" ": (added (%r) + valid_until (%r)) / 2 < now (%r)", server_name, key_id, ts_added_ms, ts_valid_until_ms, time_now_ms ) # We more than half way through the lifetime of the # response. We should fetch a fresh copy. miss = True else: logger.debug( "Cached response for %r/%r is still valid" ": (added (%r) + valid_until (%r)) / 2 < now (%r)", server_name, key_id, ts_added_ms, ts_valid_until_ms, time_now_ms ) if miss: cache_misses.setdefault(server_name, set()).add(key_id) json_results.add(bytes(most_recent_result["key_json"])) else: for ts_added, result in results: json_results.add(bytes(result["key_json"])) if cache_misses and query_remote_on_cache_miss: for server_name, key_ids in cache_misses.items(): try: yield self.keyring.get_server_verify_key_v2_direct( server_name, key_ids ) except KeyLookupError as e: logger.info("Failed to fetch key: %s", e) except: logger.exception("Failed to get key for %r", server_name) yield self.query_keys( request, query, query_remote_on_cache_miss=False ) else: result_io = BytesIO() result_io.write(b"{\"server_keys\":") sep = b"[" for json_bytes in json_results: result_io.write(sep) result_io.write(json_bytes) sep = b"," if sep == b"[": result_io.write(sep) result_io.write(b"]}") respond_with_json_bytes( request, 200, result_io.getvalue(), version_string=self.version_string ) synapse-0.24.0/synapse/rest/media/000077500000000000000000000000001317335640100170005ustar00rootroot00000000000000synapse-0.24.0/synapse/rest/media/__init__.py000066400000000000000000000000001317335640100210770ustar00rootroot00000000000000synapse-0.24.0/synapse/rest/media/v0/000077500000000000000000000000001317335640100173255ustar00rootroot00000000000000synapse-0.24.0/synapse/rest/media/v0/__init__.py000066400000000000000000000000001317335640100214240ustar00rootroot00000000000000synapse-0.24.0/synapse/rest/media/v0/content_repository.py000066400000000000000000000070461317335640100236570ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
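# A hedged sketch of the bulk key query described in RemoteKey's docstring above:
# POST a server_keys map and receive the signed key JSON for each requested
# server/key. The /_matrix/key/v2/query path follows the GET examples in that
# docstring; the `requests` dependency is an assumption for the example.
import requests  # assumed HTTP client


def bulk_query_keys(hs_url, wanted):
    # `wanted` maps server_name -> {key_id: minimum_valid_until_ts (ms)}, e.g.
    # {"remote.server.example.com": {"ed25519:auto": 1234567890123}}
    body = {
        "server_keys": {
            server: {
                key_id: {"minimum_valid_until_ts": ts}
                for key_id, ts in keys.items()
            }
            for server, keys in wanted.items()
        }
    }
    resp = requests.post("%s/_matrix/key/v2/query" % (hs_url,), json=body)
    return resp.json()["server_keys"]  # list of signed key objects, one per result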
from synapse.http.server import respond_with_json_bytes, finish_request from synapse.api.errors import ( Codes, cs_error ) from twisted.protocols.basic import FileSender from twisted.web import server, resource import base64 import simplejson as json import logging import os import re logger = logging.getLogger(__name__) class ContentRepoResource(resource.Resource): """Provides file uploading and downloading. Uploads are POSTed to wherever this Resource is linked to. This resource returns a "content token" which can be used to GET this content again. The token is typically a path, but it may not be. Tokens can expire, be one-time uses, etc. In this case, the token is a path to the file and contains 3 interesting sections: - User ID base64d (for namespacing content to each user) - random 24 char string - Content type base64d (so we can return it when clients GET it) """ isLeaf = True def __init__(self, hs, directory): resource.Resource.__init__(self) self.hs = hs self.directory = directory def render_GET(self, request): # no auth here on purpose, to allow anyone to view, even across home # servers. # TODO: A little crude here, we could do this better. filename = request.path.split('/')[-1] # be paranoid filename = re.sub("[^0-9A-z.-_]", "", filename) file_path = self.directory + "/" + filename logger.debug("Searching for %s", file_path) if os.path.isfile(file_path): # filename has the content type base64_contentype = filename.split(".")[1] content_type = base64.urlsafe_b64decode(base64_contentype) logger.info("Sending file %s", file_path) f = open(file_path, 'rb') request.setHeader('Content-Type', content_type) # cache for at least a day. # XXX: we might want to turn this off for data we don't want to # recommend caching as it's sensitive or private - or at least # select private. don't bother setting Expires as all our matrix # clients are smart enough to be happy with Cache-Control (right?) request.setHeader( "Cache-Control", "public,max-age=86400,s-maxage=86400" ) d = FileSender().beginFileTransfer(f, request) # after the file has been sent, clean up and finish the request def cbFinished(ignored): f.close() finish_request(request) d.addCallback(cbFinished) else: respond_with_json_bytes( request, 404, json.dumps(cs_error("Not found", code=Codes.NOT_FOUND)), send_cors=True) return server.NOT_DONE_YET def render_OPTIONS(self, request): respond_with_json_bytes(request, 200, {}, send_cors=True) return server.NOT_DONE_YET synapse-0.24.0/synapse/rest/media/v1/000077500000000000000000000000001317335640100173265ustar00rootroot00000000000000synapse-0.24.0/synapse/rest/media/v1/__init__.py000066400000000000000000000027561317335640100214510ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import PIL.Image # check for JPEG support. try: PIL.Image._getdecoder("rgb", "jpeg", None) except IOError as e: if str(e).startswith("decoder jpeg not available"): raise Exception( "FATAL: jpeg codec not supported. Install pillow correctly! 
" " 'sudo apt-get install libjpeg-dev' then 'pip uninstall pillow &&" " pip install pillow --user'" ) except Exception: # any other exception is fine pass # check for PNG support. try: PIL.Image._getdecoder("rgb", "zip", None) except IOError as e: if str(e).startswith("decoder zip not available"): raise Exception( "FATAL: zip codec not supported. Install pillow correctly! " " 'sudo apt-get install libjpeg-dev' then 'pip uninstall pillow &&" " pip install pillow --user'" ) except Exception: # any other exception is fine pass synapse-0.24.0/synapse/rest/media/v1/_base.py000066400000000000000000000066221317335640100207570ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from synapse.http.server import respond_with_json, finish_request from synapse.api.errors import ( cs_error, Codes, SynapseError ) from twisted.internet import defer from twisted.protocols.basic import FileSender from synapse.util.stringutils import is_ascii import os import logging import urllib import urlparse logger = logging.getLogger(__name__) def parse_media_id(request): try: # This allows users to append e.g. /test.png to the URL. Useful for # clients that parse the URL to see content type. server_name, media_id = request.postpath[:2] file_name = None if len(request.postpath) > 2: try: file_name = urlparse.unquote(request.postpath[-1]).decode("utf-8") except UnicodeDecodeError: pass return server_name, media_id, file_name except: raise SynapseError( 404, "Invalid media id token %r" % (request.postpath,), Codes.UNKNOWN, ) def respond_404(request): respond_with_json( request, 404, cs_error( "Not found %r" % (request.postpath,), code=Codes.NOT_FOUND, ), send_cors=True ) @defer.inlineCallbacks def respond_with_file(request, media_type, file_path, file_size=None, upload_name=None): logger.debug("Responding with %r", file_path) if os.path.isfile(file_path): request.setHeader(b"Content-Type", media_type.encode("UTF-8")) if upload_name: if is_ascii(upload_name): request.setHeader( b"Content-Disposition", b"inline; filename=%s" % ( urllib.quote(upload_name.encode("utf-8")), ), ) else: request.setHeader( b"Content-Disposition", b"inline; filename*=utf-8''%s" % ( urllib.quote(upload_name.encode("utf-8")), ), ) # cache for at least a day. # XXX: we might want to turn this off for data we don't want to # recommend caching as it's sensitive or private - or at least # select private. 
don't bother setting Expires as all our # clients are smart enough to be happy with Cache-Control request.setHeader( b"Cache-Control", b"public,max-age=86400,s-maxage=86400" ) if file_size is None: stat = os.stat(file_path) file_size = stat.st_size request.setHeader( b"Content-Length", b"%d" % (file_size,) ) with open(file_path, "rb") as f: yield FileSender().beginFileTransfer(f, request) finish_request(request) else: respond_404(request) synapse-0.24.0/synapse/rest/media/v1/download_resource.py000066400000000000000000000077541317335640100234330ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import synapse.http.servlet from ._base import parse_media_id, respond_with_file, respond_404 from twisted.web.resource import Resource from synapse.http.server import request_handler, set_cors_headers from twisted.web.server import NOT_DONE_YET from twisted.internet import defer import logging logger = logging.getLogger(__name__) class DownloadResource(Resource): isLeaf = True def __init__(self, hs, media_repo): Resource.__init__(self) self.filepaths = media_repo.filepaths self.media_repo = media_repo self.server_name = hs.hostname self.store = hs.get_datastore() self.version_string = hs.version_string self.clock = hs.get_clock() def render_GET(self, request): self._async_render_GET(request) return NOT_DONE_YET @request_handler() @defer.inlineCallbacks def _async_render_GET(self, request): set_cors_headers(request) request.setHeader( "Content-Security-Policy", "default-src 'none';" " script-src 'none';" " plugin-types application/pdf;" " style-src 'unsafe-inline';" " object-src 'self';" ) server_name, media_id, name = parse_media_id(request) if server_name == self.server_name: yield self._respond_local_file(request, media_id, name) else: yield self._respond_remote_file( request, server_name, media_id, name ) @defer.inlineCallbacks def _respond_local_file(self, request, media_id, name): media_info = yield self.store.get_local_media(media_id) if not media_info or media_info["quarantined_by"]: respond_404(request) return media_type = media_info["media_type"] media_length = media_info["media_length"] upload_name = name if name else media_info["upload_name"] if media_info["url_cache"]: # TODO: Check the file still exists, if it doesn't we can redownload # it from the url `media_info["url_cache"]` file_path = self.filepaths.url_cache_filepath(media_id) else: file_path = self.filepaths.local_media_filepath(media_id) yield respond_with_file( request, media_type, file_path, media_length, upload_name=upload_name, ) @defer.inlineCallbacks def _respond_remote_file(self, request, server_name, media_id, name): # don't forward requests for remote media if allow_remote is false allow_remote = synapse.http.servlet.parse_boolean( request, "allow_remote", default=True) if not allow_remote: logger.info( "Rejecting request for remote media %s/%s due to allow_remote", server_name, media_id, ) respond_404(request) return media_info = yield 
self.media_repo.get_remote_media(server_name, media_id) media_type = media_info["media_type"] media_length = media_info["media_length"] filesystem_id = media_info["filesystem_id"] upload_name = name if name else media_info["upload_name"] file_path = self.filepaths.remote_media_filepath( server_name, filesystem_id ) yield respond_with_file( request, media_type, file_path, media_length, upload_name=upload_name, ) synapse-0.24.0/synapse/rest/media/v1/filepath.py000066400000000000000000000164641317335640100215070ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import os import re import functools NEW_FORMAT_ID_RE = re.compile(r"^\d\d\d\d-\d\d-\d\d") def _wrap_in_base_path(func): """Takes a function that returns a relative path and turns it into an absolute path based on the location of the primary media store """ @functools.wraps(func) def _wrapped(self, *args, **kwargs): path = func(self, *args, **kwargs) return os.path.join(self.base_path, path) return _wrapped class MediaFilePaths(object): """Describes where files are stored on disk. Most of the functions have a `*_rel` variant which returns a file path that is relative to the base media store path. This is mainly used when we want to write to the backup media store (when one is configured) """ def __init__(self, primary_base_path): self.base_path = primary_base_path def default_thumbnail_rel(self, default_top_level, default_sub_type, width, height, content_type, method): top_level_type, sub_type = content_type.split("/") file_name = "%i-%i-%s-%s-%s" % ( width, height, top_level_type, sub_type, method ) return os.path.join( "default_thumbnails", default_top_level, default_sub_type, file_name ) default_thumbnail = _wrap_in_base_path(default_thumbnail_rel) def local_media_filepath_rel(self, media_id): return os.path.join( "local_content", media_id[0:2], media_id[2:4], media_id[4:] ) local_media_filepath = _wrap_in_base_path(local_media_filepath_rel) def local_media_thumbnail_rel(self, media_id, width, height, content_type, method): top_level_type, sub_type = content_type.split("/") file_name = "%i-%i-%s-%s-%s" % ( width, height, top_level_type, sub_type, method ) return os.path.join( "local_thumbnails", media_id[0:2], media_id[2:4], media_id[4:], file_name ) local_media_thumbnail = _wrap_in_base_path(local_media_thumbnail_rel) def remote_media_filepath_rel(self, server_name, file_id): return os.path.join( "remote_content", server_name, file_id[0:2], file_id[2:4], file_id[4:] ) remote_media_filepath = _wrap_in_base_path(remote_media_filepath_rel) def remote_media_thumbnail_rel(self, server_name, file_id, width, height, content_type, method): top_level_type, sub_type = content_type.split("/") file_name = "%i-%i-%s-%s" % (width, height, top_level_type, sub_type) return os.path.join( "remote_thumbnail", server_name, file_id[0:2], file_id[2:4], file_id[4:], file_name ) remote_media_thumbnail = _wrap_in_base_path(remote_media_thumbnail_rel) def remote_media_thumbnail_dir(self, 
server_name, file_id): return os.path.join( self.base_path, "remote_thumbnail", server_name, file_id[0:2], file_id[2:4], file_id[4:], ) def url_cache_filepath_rel(self, media_id): if NEW_FORMAT_ID_RE.match(media_id): # Media id is of the form # E.g.: 2017-09-28-fsdRDt24DS234dsf return os.path.join( "url_cache", media_id[:10], media_id[11:] ) else: return os.path.join( "url_cache", media_id[0:2], media_id[2:4], media_id[4:], ) url_cache_filepath = _wrap_in_base_path(url_cache_filepath_rel) def url_cache_filepath_dirs_to_delete(self, media_id): "The dirs to try and remove if we delete the media_id file" if NEW_FORMAT_ID_RE.match(media_id): return [ os.path.join( self.base_path, "url_cache", media_id[:10], ), ] else: return [ os.path.join( self.base_path, "url_cache", media_id[0:2], media_id[2:4], ), os.path.join( self.base_path, "url_cache", media_id[0:2], ), ] def url_cache_thumbnail_rel(self, media_id, width, height, content_type, method): # Media id is of the form # E.g.: 2017-09-28-fsdRDt24DS234dsf top_level_type, sub_type = content_type.split("/") file_name = "%i-%i-%s-%s-%s" % ( width, height, top_level_type, sub_type, method ) if NEW_FORMAT_ID_RE.match(media_id): return os.path.join( "url_cache_thumbnails", media_id[:10], media_id[11:], file_name ) else: return os.path.join( "url_cache_thumbnails", media_id[0:2], media_id[2:4], media_id[4:], file_name ) url_cache_thumbnail = _wrap_in_base_path(url_cache_thumbnail_rel) def url_cache_thumbnail_directory(self, media_id): # Media id is of the form # E.g.: 2017-09-28-fsdRDt24DS234dsf if NEW_FORMAT_ID_RE.match(media_id): return os.path.join( self.base_path, "url_cache_thumbnails", media_id[:10], media_id[11:], ) else: return os.path.join( self.base_path, "url_cache_thumbnails", media_id[0:2], media_id[2:4], media_id[4:], ) def url_cache_thumbnail_dirs_to_delete(self, media_id): "The dirs to try and remove if we delete the media_id thumbnails" # Media id is of the form # E.g.: 2017-09-28-fsdRDt24DS234dsf if NEW_FORMAT_ID_RE.match(media_id): return [ os.path.join( self.base_path, "url_cache_thumbnails", media_id[:10], media_id[11:], ), os.path.join( self.base_path, "url_cache_thumbnails", media_id[:10], ), ] else: return [ os.path.join( self.base_path, "url_cache_thumbnails", media_id[0:2], media_id[2:4], media_id[4:], ), os.path.join( self.base_path, "url_cache_thumbnails", media_id[0:2], media_id[2:4], ), os.path.join( self.base_path, "url_cache_thumbnails", media_id[0:2], ), ] synapse-0.24.0/synapse/rest/media/v1/identicon_resource.py000066400000000000000000000041111317335640100235600ustar00rootroot00000000000000# Copyright 2015, 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
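# ---------------------------------------------------------------------------
# Illustrative sketch, not part of the original module: how the MediaFilePaths
# helpers above shard ids across directories. Ordinary media ids are split on
# their first four characters, while url-cache ids in the newer
# "YYYY-MM-DD-..." format are split on the date prefix. The helper below is
# invented for illustration and mirrors local_media_filepath_rel.
import os

def _example_local_media_path(media_id):
    # e.g. "abcdefgh1234" -> "local_content/ab/cd/efgh1234"
    return os.path.join("local_content", media_id[0:2], media_id[2:4], media_id[4:])

# For the newer url-cache format, url_cache_filepath_rel instead yields e.g.
# "2017-09-28-fsdRDt24DS234dsf" -> "url_cache/2017-09-28/fsdRDt24DS234dsf".
# ---------------------------------------------------------------------------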
from pydenticon import Generator from twisted.web.resource import Resource FOREGROUND = [ "rgb(45,79,255)", "rgb(254,180,44)", "rgb(226,121,234)", "rgb(30,179,253)", "rgb(232,77,65)", "rgb(49,203,115)", "rgb(141,69,170)" ] BACKGROUND = "rgb(224,224,224)" SIZE = 5 class IdenticonResource(Resource): isLeaf = True def __init__(self): Resource.__init__(self) self.generator = Generator( SIZE, SIZE, foreground=FOREGROUND, background=BACKGROUND, ) def generate_identicon(self, name, width, height): v_padding = width % SIZE h_padding = height % SIZE top_padding = v_padding // 2 left_padding = h_padding // 2 bottom_padding = v_padding - top_padding right_padding = h_padding - left_padding width -= v_padding height -= h_padding padding = (top_padding, bottom_padding, left_padding, right_padding) identicon = self.generator.generate( name, width, height, padding=padding ) return identicon def render_GET(self, request): name = "/".join(request.postpath) width = int(request.args.get("width", [96])[0]) height = int(request.args.get("height", [96])[0]) identicon_bytes = self.generate_identicon(name, width, height) request.setHeader(b"Content-Type", b"image/png") request.setHeader( b"Cache-Control", b"public,max-age=86400,s-maxage=86400" ) return identicon_bytes synapse-0.24.0/synapse/rest/media/v1/media_repository.py000066400000000000000000000550531317335640100232660ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2014-2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
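# ---------------------------------------------------------------------------
# Worked example, not part of the original module: the padding arithmetic in
# IdenticonResource.generate_identicon above. The requested size is trimmed to
# a multiple of SIZE (5) cells and the remainder is split between opposite
# edges. For the default 96x96 request:
#   v_padding = 96 % 5 = 1  ->  top_padding = 0, bottom_padding = 1
#   h_padding = 96 % 5 = 1  ->  left_padding = 0, right_padding = 1
#   width -= 1; height -= 1 ->  a 95x95 identicon grid plus the padding,
# so the rendered PNG still comes out at exactly 96x96 pixels.
# ---------------------------------------------------------------------------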
from twisted.internet import defer, threads import twisted.internet.error import twisted.web.http from twisted.web.resource import Resource from .upload_resource import UploadResource from .download_resource import DownloadResource from .thumbnail_resource import ThumbnailResource from .identicon_resource import IdenticonResource from .preview_url_resource import PreviewUrlResource from .filepath import MediaFilePaths from .thumbnailer import Thumbnailer from synapse.http.matrixfederationclient import MatrixFederationHttpClient from synapse.util.stringutils import random_string from synapse.api.errors import SynapseError, HttpResponseException, \ NotFoundError from synapse.util.async import Linearizer from synapse.util.stringutils import is_ascii from synapse.util.logcontext import make_deferred_yieldable, preserve_fn from synapse.util.retryutils import NotRetryingDestination import os import errno import shutil import cgi import logging import urlparse logger = logging.getLogger(__name__) UPDATE_RECENTLY_ACCESSED_REMOTES_TS = 60 * 1000 class MediaRepository(object): def __init__(self, hs): self.auth = hs.get_auth() self.client = MatrixFederationHttpClient(hs) self.clock = hs.get_clock() self.server_name = hs.hostname self.store = hs.get_datastore() self.max_upload_size = hs.config.max_upload_size self.max_image_pixels = hs.config.max_image_pixels self.primary_base_path = hs.config.media_store_path self.filepaths = MediaFilePaths(self.primary_base_path) self.backup_base_path = hs.config.backup_media_store_path self.synchronous_backup_media_store = hs.config.synchronous_backup_media_store self.dynamic_thumbnails = hs.config.dynamic_thumbnails self.thumbnail_requirements = hs.config.thumbnail_requirements self.remote_media_linearizer = Linearizer(name="media_remote") self.recently_accessed_remotes = set() self.clock.looping_call( self._update_recently_accessed_remotes, UPDATE_RECENTLY_ACCESSED_REMOTES_TS ) @defer.inlineCallbacks def _update_recently_accessed_remotes(self): media = self.recently_accessed_remotes self.recently_accessed_remotes = set() yield self.store.update_cached_last_access_time( media, self.clock.time_msec() ) @staticmethod def _makedirs(filepath): dirname = os.path.dirname(filepath) if not os.path.exists(dirname): os.makedirs(dirname) @staticmethod def _write_file_synchronously(source, fname): """Write `source` to the path `fname` synchronously. Should be called from a thread. Args: source: A file like object to be written fname (str): Path to write to """ MediaRepository._makedirs(fname) source.seek(0) # Ensure we read from the start of the file with open(fname, "wb") as f: shutil.copyfileobj(source, f) @defer.inlineCallbacks def write_to_file_and_backup(self, source, path): """Write `source` to the on disk media store, and also the backup store if configured. Args: source: A file like object that should be written path (str): Relative path to write file to Returns: Deferred[str]: the file path written to in the primary media store """ fname = os.path.join(self.primary_base_path, path) # Write to the main repository yield make_deferred_yieldable(threads.deferToThread( self._write_file_synchronously, source, fname, )) # Write to backup repository yield self.copy_to_backup(path) defer.returnValue(fname) @defer.inlineCallbacks def copy_to_backup(self, path): """Copy a file from the primary to backup media store, if configured. 
Args: path(str): Relative path to write file to """ if self.backup_base_path: primary_fname = os.path.join(self.primary_base_path, path) backup_fname = os.path.join(self.backup_base_path, path) # We can either wait for successful writing to the backup repository # or write in the background and immediately return if self.synchronous_backup_media_store: yield make_deferred_yieldable(threads.deferToThread( shutil.copyfile, primary_fname, backup_fname, )) else: preserve_fn(threads.deferToThread)( shutil.copyfile, primary_fname, backup_fname, ) @defer.inlineCallbacks def create_content(self, media_type, upload_name, content, content_length, auth_user): """Store uploaded content for a local user and return the mxc URL Args: media_type(str): The content type of the file upload_name(str): The name of the file content: A file like object that is the content to store content_length(int): The length of the content auth_user(str): The user_id of the uploader Returns: Deferred[str]: The mxc url of the stored content """ media_id = random_string(24) fname = yield self.write_to_file_and_backup( content, self.filepaths.local_media_filepath_rel(media_id) ) logger.info("Stored local media in file %r", fname) yield self.store.store_local_media( media_id=media_id, media_type=media_type, time_now_ms=self.clock.time_msec(), upload_name=upload_name, media_length=content_length, user_id=auth_user, ) media_info = { "media_type": media_type, "media_length": content_length, } yield self._generate_thumbnails(None, media_id, media_info) defer.returnValue("mxc://%s/%s" % (self.server_name, media_id)) @defer.inlineCallbacks def get_remote_media(self, server_name, media_id): key = (server_name, media_id) with (yield self.remote_media_linearizer.queue(key)): media_info = yield self._get_remote_media_impl(server_name, media_id) defer.returnValue(media_info) @defer.inlineCallbacks def _get_remote_media_impl(self, server_name, media_id): media_info = yield self.store.get_cached_remote_media( server_name, media_id ) if not media_info: media_info = yield self._download_remote_file( server_name, media_id ) elif media_info["quarantined_by"]: raise NotFoundError() else: self.recently_accessed_remotes.add((server_name, media_id)) yield self.store.update_cached_last_access_time( [(server_name, media_id)], self.clock.time_msec() ) defer.returnValue(media_info) @defer.inlineCallbacks def _download_remote_file(self, server_name, media_id): file_id = random_string(24) fpath = self.filepaths.remote_media_filepath_rel( server_name, file_id ) fname = os.path.join(self.primary_base_path, fpath) self._makedirs(fname) try: with open(fname, "wb") as f: request_path = "/".join(( "/_matrix/media/v1/download", server_name, media_id, )) try: length, headers = yield self.client.get_file( server_name, request_path, output_stream=f, max_size=self.max_upload_size, args={ # tell the remote server to 404 if it doesn't # recognise the server_name, to make sure we don't # end up with a routing loop. 
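# (Illustrative note, not part of the original module: the request built
# above is GET /_matrix/media/v1/download/<server_name>/<media_id>, and the
# allow_remote=false argument below means the origin server will only serve
# media it owns, so two homeservers cannot proxy the same request back and
# forth indefinitely.)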
"allow_remote": "false", } ) except twisted.internet.error.DNSLookupError as e: logger.warn("HTTP error fetching remote media %s/%s: %r", server_name, media_id, e) raise NotFoundError() except HttpResponseException as e: logger.warn("HTTP error fetching remote media %s/%s: %s", server_name, media_id, e.response) if e.code == twisted.web.http.NOT_FOUND: raise SynapseError.from_http_response_exception(e) raise SynapseError(502, "Failed to fetch remote media") except SynapseError: logger.exception("Failed to fetch remote media %s/%s", server_name, media_id) raise except NotRetryingDestination: logger.warn("Not retrying destination %r", server_name) raise SynapseError(502, "Failed to fetch remote media") except Exception: logger.exception("Failed to fetch remote media %s/%s", server_name, media_id) raise SynapseError(502, "Failed to fetch remote media") yield self.copy_to_backup(fpath) media_type = headers["Content-Type"][0] time_now_ms = self.clock.time_msec() content_disposition = headers.get("Content-Disposition", None) if content_disposition: _, params = cgi.parse_header(content_disposition[0],) upload_name = None # First check if there is a valid UTF-8 filename upload_name_utf8 = params.get("filename*", None) if upload_name_utf8: if upload_name_utf8.lower().startswith("utf-8''"): upload_name = upload_name_utf8[7:] # If there isn't check for an ascii name. if not upload_name: upload_name_ascii = params.get("filename", None) if upload_name_ascii and is_ascii(upload_name_ascii): upload_name = upload_name_ascii if upload_name: upload_name = urlparse.unquote(upload_name) try: upload_name = upload_name.decode("utf-8") except UnicodeDecodeError: upload_name = None else: upload_name = None logger.info("Stored remote media in file %r", fname) yield self.store.store_cached_remote_media( origin=server_name, media_id=media_id, media_type=media_type, time_now_ms=self.clock.time_msec(), upload_name=upload_name, media_length=length, filesystem_id=file_id, ) except: os.remove(fname) raise media_info = { "media_type": media_type, "media_length": length, "upload_name": upload_name, "created_ts": time_now_ms, "filesystem_id": file_id, } yield self._generate_thumbnails( server_name, media_id, media_info ) defer.returnValue(media_info) def _get_thumbnail_requirements(self, media_type): return self.thumbnail_requirements.get(media_type, ()) def _generate_thumbnail(self, thumbnailer, t_width, t_height, t_method, t_type): m_width = thumbnailer.width m_height = thumbnailer.height if m_width * m_height >= self.max_image_pixels: logger.info( "Image too large to thumbnail %r x %r > %r", m_width, m_height, self.max_image_pixels ) return if t_method == "crop": t_byte_source = thumbnailer.crop(t_width, t_height, t_type) elif t_method == "scale": t_width, t_height = thumbnailer.aspect(t_width, t_height) t_width = min(m_width, t_width) t_height = min(m_height, t_height) t_byte_source = thumbnailer.scale(t_width, t_height, t_type) else: t_byte_source = None return t_byte_source @defer.inlineCallbacks def generate_local_exact_thumbnail(self, media_id, t_width, t_height, t_method, t_type): input_path = self.filepaths.local_media_filepath(media_id) thumbnailer = Thumbnailer(input_path) t_byte_source = yield make_deferred_yieldable(threads.deferToThread( self._generate_thumbnail, thumbnailer, t_width, t_height, t_method, t_type )) if t_byte_source: try: output_path = yield self.write_to_file_and_backup( t_byte_source, self.filepaths.local_media_thumbnail_rel( media_id, t_width, t_height, t_type, t_method ) ) finally: 
t_byte_source.close() logger.info("Stored thumbnail in file %r", output_path) t_len = os.path.getsize(output_path) yield self.store.store_local_thumbnail( media_id, t_width, t_height, t_type, t_method, t_len ) defer.returnValue(output_path) @defer.inlineCallbacks def generate_remote_exact_thumbnail(self, server_name, file_id, media_id, t_width, t_height, t_method, t_type): input_path = self.filepaths.remote_media_filepath(server_name, file_id) thumbnailer = Thumbnailer(input_path) t_byte_source = yield make_deferred_yieldable(threads.deferToThread( self._generate_thumbnail, thumbnailer, t_width, t_height, t_method, t_type )) if t_byte_source: try: output_path = yield self.write_to_file_and_backup( t_byte_source, self.filepaths.remote_media_thumbnail_rel( server_name, file_id, t_width, t_height, t_type, t_method ) ) finally: t_byte_source.close() logger.info("Stored thumbnail in file %r", output_path) t_len = os.path.getsize(output_path) yield self.store.store_remote_media_thumbnail( server_name, media_id, file_id, t_width, t_height, t_type, t_method, t_len ) defer.returnValue(output_path) @defer.inlineCallbacks def _generate_thumbnails(self, server_name, media_id, media_info, url_cache=False): """Generate and store thumbnails for an image. Args: server_name(str|None): The server name if remote media, else None if local media_id(str) media_info(dict) url_cache(bool): If we are thumbnailing images downloaded for the URL cache, used exclusively by the url previewer Returns: Deferred[dict]: Dict with "width" and "height" keys of original image """ media_type = media_info["media_type"] file_id = media_info.get("filesystem_id") requirements = self._get_thumbnail_requirements(media_type) if not requirements: return if server_name: input_path = self.filepaths.remote_media_filepath(server_name, file_id) elif url_cache: input_path = self.filepaths.url_cache_filepath(media_id) else: input_path = self.filepaths.local_media_filepath(media_id) thumbnailer = Thumbnailer(input_path) m_width = thumbnailer.width m_height = thumbnailer.height if m_width * m_height >= self.max_image_pixels: logger.info( "Image too large to thumbnail %r x %r > %r", m_width, m_height, self.max_image_pixels ) return # We deduplicate the thumbnail sizes by ignoring the cropped versions if # they have the same dimensions of a scaled one. 
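# (Illustrative example, not part of the original module: with a square
# 1000x1000 source and requirements of (96, 96, "crop") and (96, 96, "scale"),
# the scaled dimensions also work out at 96x96 - assuming Thumbnailer.aspect
# preserves the aspect ratio within the requested box - so both requirements
# collapse onto the same (96, 96, type) key below and only one thumbnail is
# generated for that size.)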
thumbnails = {} for r_width, r_height, r_method, r_type in requirements: if r_method == "crop": thumbnails.setdefault((r_width, r_height, r_type), r_method) elif r_method == "scale": t_width, t_height = thumbnailer.aspect(r_width, r_height) t_width = min(m_width, t_width) t_height = min(m_height, t_height) thumbnails[(t_width, t_height, r_type)] = r_method # Now we generate the thumbnails for each dimension, store it for (t_width, t_height, t_type), t_method in thumbnails.iteritems(): # Work out the correct file name for thumbnail if server_name: file_path = self.filepaths.remote_media_thumbnail_rel( server_name, file_id, t_width, t_height, t_type, t_method ) elif url_cache: file_path = self.filepaths.url_cache_thumbnail_rel( media_id, t_width, t_height, t_type, t_method ) else: file_path = self.filepaths.local_media_thumbnail_rel( media_id, t_width, t_height, t_type, t_method ) # Generate the thumbnail if t_method == "crop": t_byte_source = yield make_deferred_yieldable(threads.deferToThread( thumbnailer.crop, t_width, t_height, t_type, )) elif t_method == "scale": t_byte_source = yield make_deferred_yieldable(threads.deferToThread( thumbnailer.scale, t_width, t_height, t_type, )) else: logger.error("Unrecognized method: %r", t_method) continue if not t_byte_source: continue try: # Write to disk output_path = yield self.write_to_file_and_backup( t_byte_source, file_path, ) finally: t_byte_source.close() t_len = os.path.getsize(output_path) # Write to database if server_name: yield self.store.store_remote_media_thumbnail( server_name, media_id, file_id, t_width, t_height, t_type, t_method, t_len ) else: yield self.store.store_local_thumbnail( media_id, t_width, t_height, t_type, t_method, t_len ) defer.returnValue({ "width": m_width, "height": m_height, }) @defer.inlineCallbacks def delete_old_remote_media(self, before_ts): old_media = yield self.store.get_remote_media_before(before_ts) deleted = 0 for media in old_media: origin = media["media_origin"] media_id = media["media_id"] file_id = media["filesystem_id"] key = (origin, media_id) logger.info("Deleting: %r", key) # TODO: Should we delete from the backup store with (yield self.remote_media_linearizer.queue(key)): full_path = self.filepaths.remote_media_filepath(origin, file_id) try: os.remove(full_path) except OSError as e: logger.warn("Failed to remove file: %r", full_path) if e.errno == errno.ENOENT: pass else: continue thumbnail_dir = self.filepaths.remote_media_thumbnail_dir( origin, file_id ) shutil.rmtree(thumbnail_dir, ignore_errors=True) yield self.store.delete_remote_media(origin, media_id) deleted += 1 defer.returnValue({"deleted": deleted}) class MediaRepositoryResource(Resource): """File uploading and downloading. Uploads are POSTed to a resource which returns a token which is used to GET the download:: => POST /_matrix/media/v1/upload HTTP/1.1 Content-Type: Content-Length: <= HTTP/1.1 200 OK Content-Type: application/json { "content_uri": "mxc:///" } => GET /_matrix/media/v1/download// HTTP/1.1 <= HTTP/1.1 200 OK Content-Type: Content-Disposition: attachment;filename= Clients can get thumbnails by supplying a desired width and height and thumbnailing method:: => GET /_matrix/media/v1/thumbnail/ /?width=&height=&method= HTTP/1.1 <= HTTP/1.1 200 OK Content-Type: image/jpeg or image/png The thumbnail methods are "crop" and "scale". "scale" trys to return an image where either the width or the height is smaller than the requested size. 
The client should then scale and letterbox the image if it needs to fit within a given rectangle. "crop" trys to return an image where the width and height are close to the requested size and the aspect matches the requested size. The client should scale the image if it needs to fit within a given rectangle. """ def __init__(self, hs): Resource.__init__(self) media_repo = hs.get_media_repository() self.putChild("upload", UploadResource(hs, media_repo)) self.putChild("download", DownloadResource(hs, media_repo)) self.putChild("thumbnail", ThumbnailResource(hs, media_repo)) self.putChild("identicon", IdenticonResource()) if hs.config.url_preview_enabled: self.putChild("preview_url", PreviewUrlResource(hs, media_repo)) synapse-0.24.0/synapse/rest/media/v1/preview_url_resource.py000066400000000000000000000574701317335640100241670ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2016 OpenMarket Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from twisted.web.server import NOT_DONE_YET from twisted.internet import defer from twisted.web.resource import Resource from synapse.api.errors import ( SynapseError, Codes, ) from synapse.util.stringutils import random_string from synapse.util.caches.expiringcache import ExpiringCache from synapse.http.client import SpiderHttpClient from synapse.http.server import ( request_handler, respond_with_json_bytes ) from synapse.util.async import ObservableDeferred from synapse.util.stringutils import is_ascii import os import re import fnmatch import cgi import ujson as json import urlparse import itertools import datetime import errno import shutil import logging logger = logging.getLogger(__name__) class PreviewUrlResource(Resource): isLeaf = True def __init__(self, hs, media_repo): Resource.__init__(self) self.auth = hs.get_auth() self.clock = hs.get_clock() self.version_string = hs.version_string self.filepaths = media_repo.filepaths self.max_spider_size = hs.config.max_spider_size self.server_name = hs.hostname self.store = hs.get_datastore() self.client = SpiderHttpClient(hs) self.media_repo = media_repo self.primary_base_path = media_repo.primary_base_path self.url_preview_url_blacklist = hs.config.url_preview_url_blacklist # simple memory cache mapping urls to OG metadata self.cache = ExpiringCache( cache_name="url_previews", clock=self.clock, # don't spider URLs more often than once an hour expiry_ms=60 * 60 * 1000, ) self.cache.start() self.downloads = {} self._cleaner_loop = self.clock.looping_call( self._expire_url_cache_data, 10 * 1000 ) def render_GET(self, request): self._async_render_GET(request) return NOT_DONE_YET @request_handler() @defer.inlineCallbacks def _async_render_GET(self, request): # XXX: if get_user_by_req fails, what should we do in an async render? 
requester = yield self.auth.get_user_by_req(request) url = request.args.get("url")[0] if "ts" in request.args: ts = int(request.args.get("ts")[0]) else: ts = self.clock.time_msec() url_tuple = urlparse.urlsplit(url) for entry in self.url_preview_url_blacklist: match = True for attrib in entry: pattern = entry[attrib] value = getattr(url_tuple, attrib) logger.debug(( "Matching attrib '%s' with value '%s' against" " pattern '%s'" ) % (attrib, value, pattern)) if value is None: match = False continue if pattern.startswith('^'): if not re.match(pattern, getattr(url_tuple, attrib)): match = False continue else: if not fnmatch.fnmatch(getattr(url_tuple, attrib), pattern): match = False continue if match: logger.warn( "URL %s blocked by url_blacklist entry %s", url, entry ) raise SynapseError( 403, "URL blocked by url pattern blacklist entry", Codes.UNKNOWN ) # first check the memory cache - good to handle all the clients on this # HS thundering away to preview the same URL at the same time. og = self.cache.get(url) if og: respond_with_json_bytes(request, 200, json.dumps(og), send_cors=True) return # then check the URL cache in the DB (which will also provide us with # historical previews, if we have any) cache_result = yield self.store.get_url_cache(url, ts) if ( cache_result and cache_result["expires_ts"] > ts and cache_result["response_code"] / 100 == 2 ): respond_with_json_bytes( request, 200, cache_result["og"].encode('utf-8'), send_cors=True ) return # Ensure only one download for a given URL is active at a time download = self.downloads.get(url) if download is None: download = self._download_url(url, requester.user) download = ObservableDeferred( download, consumeErrors=True ) self.downloads[url] = download @download.addBoth def callback(media_info): del self.downloads[url] return media_info media_info = yield download.observe() # FIXME: we should probably update our cache now anyway, so that # even if the OG calculation raises, we don't keep hammering on the # remote server. For now, leave it uncached to aid debugging OG # calculation problems logger.debug("got media_info of '%s'" % media_info) if _is_media(media_info['media_type']): dims = yield self.media_repo._generate_thumbnails( None, media_info['filesystem_id'], media_info, url_cache=True, ) og = { "og:description": media_info['download_name'], "og:image": "mxc://%s/%s" % ( self.server_name, media_info['filesystem_id'] ), "og:image:type": media_info['media_type'], "matrix:image:size": media_info['media_length'], } if dims: og["og:image:width"] = dims['width'] og["og:image:height"] = dims['height'] else: logger.warn("Couldn't get dims for %s" % url) # define our OG response for this media elif _is_html(media_info['media_type']): # TODO: somehow stop a big HTML tree from exploding synapse's RAM file = open(media_info['filename']) body = file.read() file.close() # clobber the encoding from the content-type, or default to utf-8 # XXX: this overrides any or XML charset headers in the body # which may pose problems, but so far seems to work okay. match = re.match(r'.*; *charset=(.*?)(;|$)', media_info['media_type'], re.I) encoding = match.group(1) if match else "utf-8" og = decode_and_calc_og(body, media_info['uri'], encoding) # pre-cache the image for posterity # FIXME: it might be cleaner to use the same flow as the main /preview_url # request itself and benefit from the same caching etc. But for now we # just rely on the caching on the master request to speed things up. 
if 'og:image' in og and og['og:image']: image_info = yield self._download_url( _rebase_url(og['og:image'], media_info['uri']), requester.user ) if _is_media(image_info['media_type']): # TODO: make sure we don't choke on white-on-transparent images dims = yield self.media_repo._generate_thumbnails( None, image_info['filesystem_id'], image_info, url_cache=True, ) if dims: og["og:image:width"] = dims['width'] og["og:image:height"] = dims['height'] else: logger.warn("Couldn't get dims for %s" % og["og:image"]) og["og:image"] = "mxc://%s/%s" % ( self.server_name, image_info['filesystem_id'] ) og["og:image:type"] = image_info['media_type'] og["matrix:image:size"] = image_info['media_length'] else: del og["og:image"] else: logger.warn("Failed to find any OG data in %s", url) og = {} logger.debug("Calculated OG for %s as %s" % (url, og)) # store OG in ephemeral in-memory cache self.cache[url] = og # store OG in history-aware DB cache yield self.store.store_url_cache( url, media_info["response_code"], media_info["etag"], media_info["expires"] + media_info["created_ts"], json.dumps(og), media_info["filesystem_id"], media_info["created_ts"], ) respond_with_json_bytes(request, 200, json.dumps(og), send_cors=True) @defer.inlineCallbacks def _download_url(self, url, user): # TODO: we should probably honour robots.txt... except in practice # we're most likely being explicitly triggered by a human rather than a # bot, so are we really a robot? file_id = datetime.date.today().isoformat() + '_' + random_string(16) fpath = self.filepaths.url_cache_filepath_rel(file_id) fname = os.path.join(self.primary_base_path, fpath) self.media_repo._makedirs(fname) try: with open(fname, "wb") as f: logger.debug("Trying to get url '%s'" % url) length, headers, uri, code = yield self.client.get_file( url, output_stream=f, max_size=self.max_spider_size, ) # FIXME: pass through 404s and other error messages nicely yield self.media_repo.copy_to_backup(fpath) media_type = headers["Content-Type"][0] time_now_ms = self.clock.time_msec() content_disposition = headers.get("Content-Disposition", None) if content_disposition: _, params = cgi.parse_header(content_disposition[0],) download_name = None # First check if there is a valid UTF-8 filename download_name_utf8 = params.get("filename*", None) if download_name_utf8: if download_name_utf8.lower().startswith("utf-8''"): download_name = download_name_utf8[7:] # If there isn't check for an ascii name. if not download_name: download_name_ascii = params.get("filename", None) if download_name_ascii and is_ascii(download_name_ascii): download_name = download_name_ascii if download_name: download_name = urlparse.unquote(download_name) try: download_name = download_name.decode("utf-8") except UnicodeDecodeError: download_name = None else: download_name = None yield self.store.store_local_media( media_id=file_id, media_type=media_type, time_now_ms=self.clock.time_msec(), upload_name=download_name, media_length=length, user_id=user, url_cache=url, ) except Exception as e: os.remove(fname) raise SynapseError( 500, ("Failed to download content: %s" % e), Codes.UNKNOWN ) defer.returnValue({ "media_type": media_type, "media_length": length, "download_name": download_name, "created_ts": time_now_ms, "filesystem_id": file_id, "filename": fname, "uri": uri, "response_code": code, # FIXME: we should calculate a proper expiration based on the # Cache-Control and Expire headers. But for now, assume 1 hour. 
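# (Illustrative note, not part of the original module: 60 * 60 * 1000 ms is
# one hour; the caller above adds this relative value to created_ts to form
# the absolute expires_ts stored in the url cache.)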
"expires": 60 * 60 * 1000, "etag": headers["ETag"][0] if "ETag" in headers else None, }) @defer.inlineCallbacks def _expire_url_cache_data(self): """Clean up expired url cache content, media and thumbnails. """ # TODO: Delete from backup media store now = self.clock.time_msec() # First we delete expired url cache entries media_ids = yield self.store.get_expired_url_cache(now) removed_media = [] for media_id in media_ids: fname = self.filepaths.url_cache_filepath(media_id) try: os.remove(fname) except OSError as e: # If the path doesn't exist, meh if e.errno != errno.ENOENT: logger.warn("Failed to remove media: %r: %s", media_id, e) continue removed_media.append(media_id) try: dirs = self.filepaths.url_cache_filepath_dirs_to_delete(media_id) for dir in dirs: os.rmdir(dir) except: pass yield self.store.delete_url_cache(removed_media) if removed_media: logger.info("Deleted %d entries from url cache", len(removed_media)) # Now we delete old images associated with the url cache. # These may be cached for a bit on the client (i.e., they # may have a room open with a preview url thing open). # So we wait a couple of days before deleting, just in case. expire_before = now - 2 * 24 * 60 * 60 * 1000 media_ids = yield self.store.get_url_cache_media_before(expire_before) removed_media = [] for media_id in media_ids: fname = self.filepaths.url_cache_filepath(media_id) try: os.remove(fname) except OSError as e: # If the path doesn't exist, meh if e.errno != errno.ENOENT: logger.warn("Failed to remove media: %r: %s", media_id, e) continue try: dirs = self.filepaths.url_cache_filepath_dirs_to_delete(media_id) for dir in dirs: os.rmdir(dir) except: pass thumbnail_dir = self.filepaths.url_cache_thumbnail_directory(media_id) try: shutil.rmtree(thumbnail_dir) except OSError as e: # If the path doesn't exist, meh if e.errno != errno.ENOENT: logger.warn("Failed to remove media: %r: %s", media_id, e) continue removed_media.append(media_id) try: dirs = self.filepaths.url_cache_thumbnail_dirs_to_delete(media_id) for dir in dirs: os.rmdir(dir) except: pass yield self.store.delete_url_cache_media(removed_media) if removed_media: logger.info("Deleted %d media from url cache", len(removed_media)) def decode_and_calc_og(body, media_uri, request_encoding=None): from lxml import etree try: parser = etree.HTMLParser(recover=True, encoding=request_encoding) tree = etree.fromstring(body, parser) og = _calc_og(tree, media_uri) except UnicodeDecodeError: # blindly try decoding the body as utf-8, which seems to fix # the charset mismatches on https://google.com parser = etree.HTMLParser(recover=True, encoding=request_encoding) tree = etree.fromstring(body.decode('utf-8', 'ignore'), parser) og = _calc_og(tree, media_uri) return og def _calc_og(tree, media_uri): # suck our tree into lxml and define our OG response. 
# if we see any image URLs in the OG response, then spider them # (although the client could choose to do this by asking for previews of those # URLs to avoid DoSing the server) # "og:type" : "video", # "og:url" : "https://www.youtube.com/watch?v=LXDBoHyjmtw", # "og:site_name" : "YouTube", # "og:video:type" : "application/x-shockwave-flash", # "og:description" : "Fun stuff happening here", # "og:title" : "RemoteJam - Matrix team hack for Disrupt Europe Hackathon", # "og:image" : "https://i.ytimg.com/vi/LXDBoHyjmtw/maxresdefault.jpg", # "og:video:url" : "http://www.youtube.com/v/LXDBoHyjmtw?version=3&autohide=1", # "og:video:width" : "1280" # "og:video:height" : "720", # "og:video:secure_url": "https://www.youtube.com/v/LXDBoHyjmtw?version=3", og = {} for tag in tree.xpath("//*/meta[starts-with(@property, 'og:')]"): if 'content' in tag.attrib: og[tag.attrib['property']] = tag.attrib['content'] # TODO: grab article: meta tags too, e.g.: # "article:publisher" : "https://www.facebook.com/thethudonline" /> # "article:author" content="https://www.facebook.com/thethudonline" /> # "article:tag" content="baby" /> # "article:section" content="Breaking News" /> # "article:published_time" content="2016-03-31T19:58:24+00:00" /> # "article:modified_time" content="2016-04-01T18:31:53+00:00" /> if 'og:title' not in og: # do some basic spidering of the HTML title = tree.xpath("(//title)[1] | (//h1)[1] | (//h2)[1] | (//h3)[1]") if title and title[0].text is not None: og['og:title'] = title[0].text.strip() else: og['og:title'] = None if 'og:image' not in og: # TODO: extract a favicon failing all else meta_image = tree.xpath( "//*/meta[translate(@itemprop, 'IMAGE', 'image')='image']/@content" ) if meta_image: og['og:image'] = _rebase_url(meta_image[0], media_uri) else: # TODO: consider inlined CSS styles as well as width & height attribs images = tree.xpath("//img[@src][number(@width)>10][number(@height)>10]") images = sorted(images, key=lambda i: ( -1 * float(i.attrib['width']) * float(i.attrib['height']) )) if not images: images = tree.xpath("//img[@src]") if images: og['og:image'] = images[0].attrib['src'] if 'og:description' not in og: meta_description = tree.xpath( "//*/meta" "[translate(@name, 'DESCRIPTION', 'description')='description']" "/@content") if meta_description: og['og:description'] = meta_description[0] else: # grab any text nodes which are inside the tag... # unless they are within an HTML5 semantic markup tag... #
,