Merge remote-tracking branch 'origin/next'. See merge request famedly/conduit!538 (tag v0.6.0).
93 changed files with 7034 additions and 2993 deletions
@@ -0,0 +1,134 @@
# Contributor Covenant Code of Conduct

## Our Pledge

We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, caste, color, religion, or sexual
identity and orientation.

We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.

## Our Standards

Examples of behavior that contributes to a positive environment for our
community include:

* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
  and learning from the experience
* Focusing on what is best not just for us as individuals, but for the overall
  community

Examples of unacceptable behavior include:

* The use of sexualized language or imagery, and sexual attention or advances of
  any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email address,
  without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
  professional setting

## Enforcement Responsibilities

Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.

Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.

## Scope

This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.

## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement over email at
coc@koesters.xyz or over Matrix at @timo:conduit.rs.
All complaints will be reviewed and investigated promptly and fairly.

All community leaders are obligated to respect the privacy and security of the
reporter of any incident.

## Enforcement Guidelines

Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:

### 1. Correction

**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.

**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.

### 2. Warning

**Community Impact**: A violation through a single incident or series of
actions.

**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or permanent
ban.

### 3. Temporary Ban

**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.

**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.

### 4. Permanent Ban

**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.

**Consequence**: A permanent ban from any sort of public interaction within the
community.

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.1, available at
[https://www.contributor-covenant.org/version/2/1/code_of_conduct.html][v2.1].

Community Impact Guidelines were inspired by
[Mozilla's code of conduct enforcement ladder][Mozilla CoC].

For answers to common questions about this code of conduct, see the FAQ at
[https://www.contributor-covenant.org/faq][FAQ]. Translations are available at
[https://www.contributor-covenant.org/translations][translations].

[homepage]: https://www.contributor-covenant.org
[v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html
[Mozilla CoC]: https://github.com/mozilla/diversity
[FAQ]: https://www.contributor-covenant.org/faq
[translations]: https://www.contributor-covenant.org/translations
@@ -1,23 +0,0 @@
[build.env]
# CI uses an S3 endpoint to store sccache artifacts, so their config needs to
# be available in the cross container as well
passthrough = [
    "RUSTC_WRAPPER",
    "AWS_ACCESS_KEY_ID",
    "AWS_SECRET_ACCESS_KEY",
    "SCCACHE_BUCKET",
    "SCCACHE_ENDPOINT",
    "SCCACHE_S3_USE_SSL",
]

[target.aarch64-unknown-linux-musl]
image = "registry.gitlab.com/jfowl/conduit-containers/rust-cross-aarch64-unknown-linux-musl:latest"

[target.arm-unknown-linux-musleabihf]
image = "registry.gitlab.com/jfowl/conduit-containers/rust-cross-arm-unknown-linux-musleabihf:latest"

[target.armv7-unknown-linux-musleabihf]
image = "registry.gitlab.com/jfowl/conduit-containers/rust-cross-armv7-unknown-linux-musleabihf:latest"

[target.x86_64-unknown-linux-musl]
image = "registry.gitlab.com/jfowl/conduit-containers/rust-cross-x86_64-unknown-linux-musl@sha256:b6d689e42f0236c8a38b961bca2a12086018b85ed20e0826310421daf182e2bb"
@@ -0,0 +1,48 @@
# For use in our CI only. This requires a build artifact created by a previous pipeline stage to be placed in cached_target/release/conduit
FROM registry.gitlab.com/jfowl/conduit-containers/rust-with-tools:commit-16a08e9b as builder
#FROM rust:latest as builder

WORKDIR /workdir

ARG RUSTC_WRAPPER
ARG AWS_ACCESS_KEY_ID
ARG AWS_SECRET_ACCESS_KEY
ARG SCCACHE_BUCKET
ARG SCCACHE_ENDPOINT
ARG SCCACHE_S3_USE_SSL

COPY . .
RUN mkdir -p target/release
RUN test -e cached_target/release/conduit && cp cached_target/release/conduit target/release/conduit || cargo build --release

## Actual image
FROM debian:bullseye
WORKDIR /workdir

# Install caddy
RUN apt-get update && apt-get install -y debian-keyring debian-archive-keyring apt-transport-https curl && curl -1sLf 'https://dl.cloudsmith.io/public/caddy/testing/gpg.key' | gpg --dearmor -o /usr/share/keyrings/caddy-testing-archive-keyring.gpg && curl -1sLf 'https://dl.cloudsmith.io/public/caddy/testing/debian.deb.txt' | tee /etc/apt/sources.list.d/caddy-testing.list && apt-get update && apt-get install -y caddy

COPY conduit-example.toml conduit.toml
COPY complement/caddy.json caddy.json

ENV SERVER_NAME=localhost
ENV CONDUIT_CONFIG=/workdir/conduit.toml

RUN sed -i "s/port = 6167/port = 8008/g" conduit.toml
RUN echo "allow_federation = true" >> conduit.toml
RUN echo "allow_check_for_updates = true" >> conduit.toml
RUN echo "allow_encryption = true" >> conduit.toml
RUN echo "allow_registration = true" >> conduit.toml
RUN echo "log = \"warn,_=off,sled=off\"" >> conduit.toml
RUN sed -i "s/address = \"127.0.0.1\"/address = \"0.0.0.0\"/g" conduit.toml

COPY --from=builder /workdir/target/release/conduit /workdir/conduit
RUN chmod +x /workdir/conduit

EXPOSE 8008 8448

CMD uname -a && \
    sed -i "s/#server_name = \"your.server.name\"/server_name = \"${SERVER_NAME}\"/g" conduit.toml && \
    sed -i "s/your.server.name/${SERVER_NAME}/g" caddy.json && \
    caddy start --config caddy.json > /dev/null && \
    /workdir/conduit
@@ -0,0 +1,13 @@
# Running Conduit on Complement

This assumes that you're familiar with Complement; if not, please read
[their README](https://github.com/matrix-org/complement#running).

Complement works with "base images"; this directory (and Dockerfile) helps build the
Complement-ready Conduit Docker image.

To build, `cd` to the base directory of the workspace, and run this:

`docker build -t complement-conduit:dev -f complement/Dockerfile .`

Then use `complement-conduit:dev` as a base image for running Complement tests.
@@ -0,0 +1,72 @@
{
  "logging": {
    "logs": {
      "default": {
        "level": "WARN"
      }
    }
  },
  "apps": {
    "http": {
      "https_port": 8448,
      "servers": {
        "srv0": {
          "listen": [":8448"],
          "routes": [{
            "match": [{
              "host": ["your.server.name"]
            }],
            "handle": [{
              "handler": "subroute",
              "routes": [{
                "handle": [{
                  "handler": "reverse_proxy",
                  "upstreams": [{
                    "dial": "127.0.0.1:8008"
                  }]
                }]
              }]
            }],
            "terminal": true
          }],
          "tls_connection_policies": [{
            "match": {
              "sni": ["your.server.name"]
            }
          }]
        }
      }
    },
    "pki": {
      "certificate_authorities": {
        "local": {
          "name": "Complement CA",
          "root": {
            "certificate": "/complement/ca/ca.crt",
            "private_key": "/complement/ca/ca.key"
          },
          "intermediate": {
            "certificate": "/complement/ca/ca.crt",
            "private_key": "/complement/ca/ca.key"
          }
        }
      }
    },
    "tls": {
      "automation": {
        "policies": [{
          "subjects": ["your.server.name"],
          "issuers": [{
            "module": "internal"
          }],
          "on_demand": true
        }, {
          "issuers": [{
            "module": "internal",
            "ca": "local"
          }]
        }]
      }
    }
  }
}
@@ -1,28 +1,36 @@
Conduit for Debian
==================

Installation
------------

For information about downloading, building and deploying the Debian package, see
the "Installing Conduit" section in [DEPLOY.md](../DEPLOY.md).
All following sections until "Setting up the Reverse Proxy" can be ignored because
this is handled automatically by the packaging.

Configuration
-------------

When installed, Debconf generates the configuration of the homeserver
(host)name, the address and port it listens on. This configuration ends up in
-/etc/matrix-conduit/conduit.toml.
+`/etc/matrix-conduit/conduit.toml`.

You can tweak more detailed settings by uncommenting and setting the variables
-in /etc/matrix-conduit/conduit.toml. This involves settings such as the maximum
+in `/etc/matrix-conduit/conduit.toml`. This involves settings such as the maximum
file size for download/upload, enabling federation, etc.

Running
-------

-The package uses the matrix-conduit.service systemd unit file to start and
+The package uses the `matrix-conduit.service` systemd unit file to start and
stop Conduit. It loads the configuration file mentioned above to set up the
environment before running the server.

This package assumes by default that Conduit will be placed behind a reverse
proxy such as Apache or nginx. This default deployment entails just listening
-on 127.0.0.1 and the free port 6167 and is reachable via a client using the URL
-http://localhost:6167.
+on `127.0.0.1` and the free port `6167` and is reachable via a client using the URL
+<http://localhost:6167>.

At a later stage this packaging may support also setting up TLS and running
stand-alone. In this case, however, you need to set up some certificates and
@@ -0,0 +1,146 @@
use ruma::api::client::relations::{
    get_relating_events, get_relating_events_with_rel_type,
    get_relating_events_with_rel_type_and_event_type,
};

use crate::{service::rooms::timeline::PduCount, services, Result, Ruma};

/// # `GET /_matrix/client/r0/rooms/{roomId}/relations/{eventId}/{relType}/{eventType}`
pub async fn get_relating_events_with_rel_type_and_event_type_route(
    body: Ruma<get_relating_events_with_rel_type_and_event_type::v1::Request>,
) -> Result<get_relating_events_with_rel_type_and_event_type::v1::Response> {
    let sender_user = body.sender_user.as_ref().expect("user is authenticated");

    let from = match body.from.clone() {
        Some(from) => PduCount::try_from_string(&from)?,
        None => match ruma::api::Direction::Backward {
            // TODO: fix ruma so `body.dir` exists
            ruma::api::Direction::Forward => PduCount::min(),
            ruma::api::Direction::Backward => PduCount::max(),
        },
    };

    let to = body
        .to
        .as_ref()
        .and_then(|t| PduCount::try_from_string(&t).ok());

    // Use limit or else 10, with maximum 100
    let limit = body
        .limit
        .and_then(|u| u32::try_from(u).ok())
        .map_or(10_usize, |u| u as usize)
        .min(100);

    let res = services()
        .rooms
        .pdu_metadata
        .paginate_relations_with_filter(
            sender_user,
            &body.room_id,
            &body.event_id,
            Some(body.event_type.clone()),
            Some(body.rel_type.clone()),
            from,
            to,
            limit,
        )?;

    Ok(
        get_relating_events_with_rel_type_and_event_type::v1::Response {
            chunk: res.chunk,
            next_batch: res.next_batch,
            prev_batch: res.prev_batch,
        },
    )
}

/// # `GET /_matrix/client/r0/rooms/{roomId}/relations/{eventId}/{relType}`
pub async fn get_relating_events_with_rel_type_route(
    body: Ruma<get_relating_events_with_rel_type::v1::Request>,
) -> Result<get_relating_events_with_rel_type::v1::Response> {
    let sender_user = body.sender_user.as_ref().expect("user is authenticated");

    let from = match body.from.clone() {
        Some(from) => PduCount::try_from_string(&from)?,
        None => match ruma::api::Direction::Backward {
            // TODO: fix ruma so `body.dir` exists
            ruma::api::Direction::Forward => PduCount::min(),
            ruma::api::Direction::Backward => PduCount::max(),
        },
    };

    let to = body
        .to
        .as_ref()
        .and_then(|t| PduCount::try_from_string(&t).ok());

    // Use limit or else 10, with maximum 100
    let limit = body
        .limit
        .and_then(|u| u32::try_from(u).ok())
        .map_or(10_usize, |u| u as usize)
        .min(100);

    let res = services()
        .rooms
        .pdu_metadata
        .paginate_relations_with_filter(
            sender_user,
            &body.room_id,
            &body.event_id,
            None,
            Some(body.rel_type.clone()),
            from,
            to,
            limit,
        )?;

    Ok(get_relating_events_with_rel_type::v1::Response {
        chunk: res.chunk,
        next_batch: res.next_batch,
        prev_batch: res.prev_batch,
    })
}

/// # `GET /_matrix/client/r0/rooms/{roomId}/relations/{eventId}`
pub async fn get_relating_events_route(
    body: Ruma<get_relating_events::v1::Request>,
) -> Result<get_relating_events::v1::Response> {
    let sender_user = body.sender_user.as_ref().expect("user is authenticated");

    let from = match body.from.clone() {
        Some(from) => PduCount::try_from_string(&from)?,
        None => match ruma::api::Direction::Backward {
            // TODO: fix ruma so `body.dir` exists
            ruma::api::Direction::Forward => PduCount::min(),
            ruma::api::Direction::Backward => PduCount::max(),
        },
    };

    let to = body
        .to
        .as_ref()
        .and_then(|t| PduCount::try_from_string(&t).ok());

    // Use limit or else 10, with maximum 100
    let limit = body
        .limit
        .and_then(|u| u32::try_from(u).ok())
        .map_or(10_usize, |u| u as usize)
        .min(100);

    services()
        .rooms
        .pdu_metadata
        .paginate_relations_with_filter(
            sender_user,
            &body.room_id,
            &body.event_id,
            None,
            None,
            from,
            to,
            limit,
        )
}
@@ -0,0 +1,34 @@
use crate::{services, Result, Ruma};
use ruma::api::client::space::get_hierarchy;

/// # `GET /_matrix/client/v1/rooms/{room_id}/hierarchy`
///
/// Paginates over the space tree in a depth-first manner to locate child rooms of a given space.
pub async fn get_hierarchy_route(
    body: Ruma<get_hierarchy::v1::Request>,
) -> Result<get_hierarchy::v1::Response> {
    let sender_user = body.sender_user.as_ref().expect("user is authenticated");

    let skip = body
        .from
        .as_ref()
        .and_then(|s| s.parse::<usize>().ok())
        .unwrap_or(0);

    let limit = body.limit.map_or(10, u64::from).min(100) as usize;

    let max_depth = body.max_depth.map_or(3, u64::from).min(10) as usize + 1; // +1 to skip the space room itself

    services()
        .rooms
        .spaces
        .get_hierarchy(
            sender_user,
            &body.room_id,
            limit,
            skip,
            max_depth,
            body.suggested_only,
        )
        .await
}
@@ -0,0 +1,49 @@
use ruma::api::client::{error::ErrorKind, threads::get_threads};

use crate::{services, Error, Result, Ruma};

/// # `GET /_matrix/client/r0/rooms/{roomId}/threads`
pub async fn get_threads_route(
    body: Ruma<get_threads::v1::Request>,
) -> Result<get_threads::v1::Response> {
    let sender_user = body.sender_user.as_ref().expect("user is authenticated");

    // Use limit or else 10, with maximum 100
    let limit = body
        .limit
        .and_then(|l| l.try_into().ok())
        .unwrap_or(10)
        .min(100);

    let from = if let Some(from) = &body.from {
        from.parse()
            .map_err(|_| Error::BadRequest(ErrorKind::InvalidParam, ""))?
    } else {
        u64::MAX
    };

    let threads = services()
        .rooms
        .threads
        .threads_until(sender_user, &body.room_id, from, &body.include)?
        .take(limit)
        .filter_map(|r| r.ok())
        .filter(|(_, pdu)| {
            services()
                .rooms
                .state_accessor
                .user_can_see_event(sender_user, &body.room_id, &pdu.event_id)
                .unwrap_or(false)
        })
        .collect::<Vec<_>>();

    let next_batch = threads.last().map(|(count, _)| count.to_string());

    Ok(get_threads::v1::Response {
        chunk: threads
            .into_iter()
            .map(|(_, pdu)| pdu.to_room_event())
            .collect(),
        next_batch,
    })
}
@@ -0,0 +1,78 @@
use std::mem;

use ruma::{api::client::threads::get_threads::v1::IncludeThreads, OwnedUserId, RoomId, UserId};

use crate::{database::KeyValueDatabase, service, services, utils, Error, PduEvent, Result};

impl service::rooms::threads::Data for KeyValueDatabase {
    fn threads_until<'a>(
        &'a self,
        user_id: &'a UserId,
        room_id: &'a RoomId,
        until: u64,
        include: &'a IncludeThreads,
    ) -> Result<Box<dyn Iterator<Item = Result<(u64, PduEvent)>> + 'a>> {
        let prefix = services()
            .rooms
            .short
            .get_shortroomid(room_id)?
            .expect("room exists")
            .to_be_bytes()
            .to_vec();

        let mut current = prefix.clone();
        current.extend_from_slice(&(until - 1).to_be_bytes());

        Ok(Box::new(
            self.threadid_userids
                .iter_from(&current, true)
                .take_while(move |(k, _)| k.starts_with(&prefix))
                .map(move |(pduid, users)| {
                    let count = utils::u64_from_bytes(&pduid[(mem::size_of::<u64>())..])
                        .map_err(|_| Error::bad_database("Invalid pduid in threadid_userids."))?;
                    let mut pdu = services()
                        .rooms
                        .timeline
                        .get_pdu_from_id(&pduid)?
                        .ok_or_else(|| {
                            Error::bad_database("Invalid pduid reference in threadid_userids")
                        })?;
                    if pdu.sender != user_id {
                        pdu.remove_transaction_id()?;
                    }
                    Ok((count, pdu))
                }),
        ))
    }

    fn update_participants(&self, root_id: &[u8], participants: &[OwnedUserId]) -> Result<()> {
        let users = participants
            .iter()
            .map(|user| user.as_bytes())
            .collect::<Vec<_>>()
            .join(&[0xff][..]);

        self.threadid_userids.insert(&root_id, &users)?;

        Ok(())
    }

    fn get_participants(&self, root_id: &[u8]) -> Result<Option<Vec<OwnedUserId>>> {
        if let Some(users) = self.threadid_userids.get(&root_id)? {
            Ok(Some(
                users
                    .split(|b| *b == 0xff)
                    .map(|bytes| {
                        UserId::parse(utils::string_from_bytes(bytes).map_err(|_| {
                            Error::bad_database("Invalid UserId bytes in threadid_userids.")
                        })?)
                        .map_err(|_| Error::bad_database("Invalid UserId in threadid_userids."))
                    })
                    .filter_map(|r| r.ok())
                    .collect(),
            ))
        } else {
            Ok(None)
        }
    }
}
@ -0,0 +1,505 @@
|
||||
use std::sync::{Arc, Mutex}; |
||||
|
||||
use lru_cache::LruCache; |
||||
use ruma::{ |
||||
api::{ |
||||
client::{ |
||||
error::ErrorKind, |
||||
space::{get_hierarchy, SpaceHierarchyRoomsChunk}, |
||||
}, |
||||
federation, |
||||
}, |
||||
events::{ |
||||
room::{ |
||||
avatar::RoomAvatarEventContent, |
||||
canonical_alias::RoomCanonicalAliasEventContent, |
||||
create::RoomCreateEventContent, |
||||
guest_access::{GuestAccess, RoomGuestAccessEventContent}, |
||||
history_visibility::{HistoryVisibility, RoomHistoryVisibilityEventContent}, |
||||
join_rules::{self, AllowRule, JoinRule, RoomJoinRulesEventContent}, |
||||
topic::RoomTopicEventContent, |
||||
}, |
||||
space::child::SpaceChildEventContent, |
||||
StateEventType, |
||||
}, |
||||
space::SpaceRoomJoinRule, |
||||
OwnedRoomId, RoomId, UserId, |
||||
}; |
||||
|
||||
use tracing::{debug, error, warn}; |
||||
|
||||
use crate::{services, Error, PduEvent, Result}; |
||||
|
||||
pub enum CachedJoinRule { |
||||
//Simplified(SpaceRoomJoinRule),
|
||||
Full(JoinRule), |
||||
} |
||||
|
||||
pub struct CachedSpaceChunk { |
||||
chunk: SpaceHierarchyRoomsChunk, |
||||
children: Vec<OwnedRoomId>, |
||||
join_rule: CachedJoinRule, |
||||
} |
||||
|
||||
pub struct Service { |
||||
pub roomid_spacechunk_cache: Mutex<LruCache<OwnedRoomId, Option<CachedSpaceChunk>>>, |
||||
} |
||||
|
||||
impl Service { |
||||
pub async fn get_hierarchy( |
||||
&self, |
||||
sender_user: &UserId, |
||||
room_id: &RoomId, |
||||
limit: usize, |
||||
skip: usize, |
||||
max_depth: usize, |
||||
suggested_only: bool, |
||||
) -> Result<get_hierarchy::v1::Response> { |
||||
let mut left_to_skip = skip; |
||||
|
||||
let mut rooms_in_path = Vec::new(); |
||||
let mut stack = vec![vec![room_id.to_owned()]]; |
||||
let mut results = Vec::new(); |
||||
|
||||
while let Some(current_room) = { |
||||
while stack.last().map_or(false, |s| s.is_empty()) { |
||||
stack.pop(); |
||||
} |
||||
if !stack.is_empty() { |
||||
stack.last_mut().and_then(|s| s.pop()) |
||||
} else { |
||||
None |
||||
} |
||||
} { |
||||
rooms_in_path.push(current_room.clone()); |
||||
if results.len() >= limit { |
||||
break; |
||||
} |
||||
|
||||
if let Some(cached) = self |
||||
.roomid_spacechunk_cache |
||||
.lock() |
||||
.unwrap() |
||||
.get_mut(¤t_room.to_owned()) |
||||
.as_ref() |
||||
{ |
||||
if let Some(cached) = cached { |
||||
let allowed = match &cached.join_rule { |
||||
//CachedJoinRule::Simplified(s) => {
|
||||
//self.handle_simplified_join_rule(s, sender_user, ¤t_room)?
|
||||
//}
|
||||
CachedJoinRule::Full(f) => { |
||||
self.handle_join_rule(f, sender_user, ¤t_room)? |
||||
} |
||||
}; |
||||
if allowed { |
||||
if left_to_skip > 0 { |
||||
left_to_skip -= 1; |
||||
} else { |
||||
results.push(cached.chunk.clone()); |
||||
} |
||||
if rooms_in_path.len() < max_depth { |
||||
stack.push(cached.children.clone()); |
||||
} |
||||
} |
||||
} |
||||
continue; |
||||
} |
||||
|
||||
if let Some(current_shortstatehash) = services() |
||||
.rooms |
||||
.state |
||||
.get_room_shortstatehash(¤t_room)? |
||||
{ |
||||
let state = services() |
||||
.rooms |
||||
.state_accessor |
||||
.state_full_ids(current_shortstatehash) |
||||
.await?; |
||||
|
||||
let mut children_ids = Vec::new(); |
||||
let mut children_pdus = Vec::new(); |
||||
for (key, id) in state { |
||||
let (event_type, state_key) = |
||||
services().rooms.short.get_statekey_from_short(key)?; |
||||
if event_type != StateEventType::SpaceChild { |
||||
continue; |
||||
} |
||||
|
||||
let pdu = services() |
||||
.rooms |
||||
.timeline |
||||
.get_pdu(&id)? |
||||
.ok_or_else(|| Error::bad_database("Event in space state not found"))?; |
||||
|
||||
if serde_json::from_str::<SpaceChildEventContent>(pdu.content.get()) |
||||
.ok() |
||||
.and_then(|c| c.via) |
||||
.map_or(true, |v| v.is_empty()) |
||||
{ |
||||
continue; |
||||
} |
||||
|
||||
if let Ok(room_id) = OwnedRoomId::try_from(state_key) { |
||||
children_ids.push(room_id); |
||||
children_pdus.push(pdu); |
||||
} |
||||
} |
||||
|
||||
// TODO: Sort children
|
||||
children_ids.reverse(); |
||||
|
||||
let chunk = self.get_room_chunk(sender_user, ¤t_room, children_pdus); |
||||
if let Ok(chunk) = chunk { |
||||
if left_to_skip > 0 { |
||||
left_to_skip -= 1; |
||||
} else { |
||||
results.push(chunk.clone()); |
||||
} |
||||
let join_rule = services() |
||||
.rooms |
||||
.state_accessor |
||||
.room_state_get(¤t_room, &StateEventType::RoomJoinRules, "")? |
||||
.map(|s| { |
||||
serde_json::from_str(s.content.get()) |
||||
.map(|c: RoomJoinRulesEventContent| c.join_rule) |
||||
.map_err(|e| { |
||||
error!("Invalid room join rule event in database: {}", e); |
||||
Error::BadDatabase("Invalid room join rule event in database.") |
||||
}) |
||||
}) |
||||
.transpose()? |
||||
.unwrap_or(JoinRule::Invite); |
||||
|
||||
self.roomid_spacechunk_cache.lock().unwrap().insert( |
||||
current_room.clone(), |
||||
Some(CachedSpaceChunk { |
||||
chunk, |
||||
children: children_ids.clone(), |
||||
join_rule: CachedJoinRule::Full(join_rule), |
||||
}), |
||||
); |
||||
} |
||||
|
||||
if rooms_in_path.len() < max_depth { |
||||
stack.push(children_ids); |
||||
} |
||||
} else { |
||||
let server = current_room.server_name(); |
||||
if server == services().globals.server_name() { |
||||
continue; |
||||
} |
||||
if !results.is_empty() { |
||||
// Early return so the client can see some data already
|
||||
break; |
||||
} |
||||
warn!("Asking {server} for /hierarchy"); |
||||
if let Ok(response) = services() |
||||
.sending |
||||
.send_federation_request( |
||||
&server, |
||||
federation::space::get_hierarchy::v1::Request { |
||||
room_id: current_room.to_owned(), |
||||
suggested_only, |
||||
}, |
||||
) |
||||
.await |
||||
{ |
||||
warn!("Got response from {server} for /hierarchy\n{response:?}"); |
||||
let chunk = SpaceHierarchyRoomsChunk { |
||||
canonical_alias: response.room.canonical_alias, |
||||
name: response.room.name, |
||||
num_joined_members: response.room.num_joined_members, |
||||
room_id: response.room.room_id, |
||||
topic: response.room.topic, |
||||
world_readable: response.room.world_readable, |
||||
guest_can_join: response.room.guest_can_join, |
||||
avatar_url: response.room.avatar_url, |
||||
join_rule: response.room.join_rule.clone(), |
||||
room_type: response.room.room_type, |
||||
children_state: response.room.children_state, |
||||
}; |
||||
let children = response |
||||
.children |
||||
.iter() |
||||
.map(|c| c.room_id.clone()) |
||||
.collect::<Vec<_>>(); |
||||
|
||||
let join_rule = match response.room.join_rule { |
||||
SpaceRoomJoinRule::Invite => JoinRule::Invite, |
||||
SpaceRoomJoinRule::Knock => JoinRule::Knock, |
||||
SpaceRoomJoinRule::Private => JoinRule::Private, |
||||
SpaceRoomJoinRule::Restricted => { |
||||
JoinRule::Restricted(join_rules::Restricted { |
||||
allow: response |
||||
.room |
||||
.allowed_room_ids |
||||
.into_iter() |
||||
.map(|room| AllowRule::room_membership(room)) |
||||
.collect(), |
||||
}) |
||||
} |
||||
SpaceRoomJoinRule::KnockRestricted => { |
||||
JoinRule::KnockRestricted(join_rules::Restricted { |
||||
allow: response |
||||
.room |
||||
.allowed_room_ids |
||||
.into_iter() |
||||
.map(|room| AllowRule::room_membership(room)) |
||||
.collect(), |
||||
}) |
||||
} |
||||
SpaceRoomJoinRule::Public => JoinRule::Public, |
||||
_ => return Err(Error::BadServerResponse("Unknown join rule")), |
||||
}; |
||||
                    if self.handle_join_rule(&join_rule, sender_user, &current_room)? {
                        if left_to_skip > 0 {
                            left_to_skip -= 1;
                        } else {
                            results.push(chunk.clone());
                        }
                        if rooms_in_path.len() < max_depth {
                            stack.push(children.clone());
                        }
                    }

                    self.roomid_spacechunk_cache.lock().unwrap().insert(
                        current_room.clone(),
                        Some(CachedSpaceChunk {
                            chunk,
                            children,
                            join_rule: CachedJoinRule::Full(join_rule),
                        }),
                    );

                    /* TODO:
                    for child in response.children {
                        roomid_spacechunk_cache.insert(
                            current_room.clone(),
                            CachedSpaceChunk {
                                chunk: child.chunk,
                                children,
                                join_rule,
                            },
                        );
                    }
                    */
                } else {
                    self.roomid_spacechunk_cache
                        .lock()
                        .unwrap()
                        .insert(current_room.clone(), None);
                }
            }
        }

        Ok(get_hierarchy::v1::Response {
            next_batch: if results.is_empty() {
                None
            } else {
                Some((skip + results.len()).to_string())
            },
            rooms: results,
        })
    }

    fn get_room_chunk(
        &self,
        sender_user: &UserId,
        room_id: &RoomId,
        children: Vec<Arc<PduEvent>>,
    ) -> Result<SpaceHierarchyRoomsChunk> {
        Ok(SpaceHierarchyRoomsChunk {
            canonical_alias: services()
                .rooms
                .state_accessor
                .room_state_get(&room_id, &StateEventType::RoomCanonicalAlias, "")?
                .map_or(Ok(None), |s| {
                    serde_json::from_str(s.content.get())
                        .map(|c: RoomCanonicalAliasEventContent| c.alias)
                        .map_err(|_| {
                            Error::bad_database("Invalid canonical alias event in database.")
                        })
                })?,
            name: services().rooms.state_accessor.get_name(&room_id)?,
            num_joined_members: services()
                .rooms
                .state_cache
                .room_joined_count(&room_id)?
                .unwrap_or_else(|| {
                    warn!("Room {} has no member count", room_id);
                    0
                })
                .try_into()
                .expect("user count should not be that big"),
            room_id: room_id.to_owned(),
            topic: services()
                .rooms
                .state_accessor
                .room_state_get(&room_id, &StateEventType::RoomTopic, "")?
                .map_or(Ok(None), |s| {
                    serde_json::from_str(s.content.get())
                        .map(|c: RoomTopicEventContent| Some(c.topic))
                        .map_err(|_| {
                            error!("Invalid room topic event in database for room {}", room_id);
                            Error::bad_database("Invalid room topic event in database.")
                        })
                })?,
            world_readable: services()
                .rooms
                .state_accessor
                .room_state_get(&room_id, &StateEventType::RoomHistoryVisibility, "")?
                .map_or(Ok(false), |s| {
                    serde_json::from_str(s.content.get())
                        .map(|c: RoomHistoryVisibilityEventContent| {
                            c.history_visibility == HistoryVisibility::WorldReadable
                        })
                        .map_err(|_| {
                            Error::bad_database(
                                "Invalid room history visibility event in database.",
                            )
                        })
                })?,
            guest_can_join: services()
                .rooms
                .state_cache
                .room_state_get(&room_id, &StateEventType::RoomGuestAccess, "")?
                .map_or(Ok(false), |s| {
                    serde_json::from_str(s.content.get())
                        .map(|c: RoomGuestAccessEventContent| {
                            c.guest_access == GuestAccess::CanJoin
                        })
                        .map_err(|_| {
                            Error::bad_database("Invalid room guest access event in database.")
                        })
                })?,
            avatar_url: services()
                .rooms
                .state_accessor
                .room_state_get(&room_id, &StateEventType::RoomAvatar, "")?
                .map(|s| {
                    serde_json::from_str(s.content.get())
                        .map(|c: RoomAvatarEventContent| c.url)
                        .map_err(|_| Error::bad_database("Invalid room avatar event in database."))
                })
                .transpose()?
                // url is now an Option<String> so we must flatten
                .flatten(),
            join_rule: {
                let join_rule = services()
                    .rooms
                    .state_accessor
                    .room_state_get(&room_id, &StateEventType::RoomJoinRules, "")?
                    .map(|s| {
                        serde_json::from_str(s.content.get())
                            .map(|c: RoomJoinRulesEventContent| c.join_rule)
                            .map_err(|e| {
                                error!("Invalid room join rule event in database: {}", e);
                                Error::BadDatabase("Invalid room join rule event in database.")
                            })
                    })
                    .transpose()?
                    .unwrap_or(JoinRule::Invite);

                if !self.handle_join_rule(&join_rule, sender_user, room_id)? {
                    debug!("User is not allowed to see room {room_id}");
                    // This error will be caught later
                    return Err(Error::BadRequest(
                        ErrorKind::Forbidden,
                        "User is not allowed to see the room",
                    ));
                }

                self.translate_joinrule(&join_rule)?
            },
            room_type: services()
                .rooms
                .state_accessor
                .room_state_get(&room_id, &StateEventType::RoomCreate, "")?
                .map(|s| {
                    serde_json::from_str::<RoomCreateEventContent>(s.content.get()).map_err(|e| {
                        error!("Invalid room create event in database: {}", e);
                        Error::BadDatabase("Invalid room create event in database.")
                    })
                })
                .transpose()?
                .and_then(|e| e.room_type),
            children_state: children
                .into_iter()
                .map(|pdu| pdu.to_stripped_spacechild_state_event())
                .collect(),
        })
    }
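The same pattern recurs in every field above: read an optional state event, parse its content, and map a parse failure to a database error. That shape can be sketched generically; the function name and the string error type here are illustrative, not Conduit's actual API:

```rust
// Illustrative sketch: turn an optional raw state-event body into
// Result<Option<T>>, mapping any parse failure to a "database" error,
// as the map_or(Ok(None), ...) chains above do for each state event.
fn parse_optional_state<T, E>(
    raw: Option<&str>,
    parse: impl Fn(&str) -> Result<T, E>,
) -> Result<Option<T>, String> {
    raw.map_or(Ok(None), |s| {
        parse(s)
            .map(Some)
            .map_err(|_| "Invalid event in database.".to_string())
    })
}
```

A missing event yields `Ok(None)`, a present-but-corrupt event yields an error, and only a present, parseable event yields `Ok(Some(..))`.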

    fn translate_joinrule(&self, join_rule: &JoinRule) -> Result<SpaceRoomJoinRule> {
        match join_rule {
            JoinRule::Invite => Ok(SpaceRoomJoinRule::Invite),
            JoinRule::Knock => Ok(SpaceRoomJoinRule::Knock),
            JoinRule::Private => Ok(SpaceRoomJoinRule::Private),
            JoinRule::Restricted(_) => Ok(SpaceRoomJoinRule::Restricted),
            JoinRule::KnockRestricted(_) => Ok(SpaceRoomJoinRule::KnockRestricted),
            JoinRule::Public => Ok(SpaceRoomJoinRule::Public),
            _ => Err(Error::BadServerResponse("Unknown join rule")),
        }
    }

    fn handle_simplified_join_rule(
        &self,
        join_rule: &SpaceRoomJoinRule,
        sender_user: &UserId,
        room_id: &RoomId,
    ) -> Result<bool> {
        let allowed = match join_rule {
            SpaceRoomJoinRule::Public => true,
            SpaceRoomJoinRule::Knock => true,
            SpaceRoomJoinRule::Invite => services()
                .rooms
                .state_cache
                .is_joined(sender_user, &room_id)?,
            _ => false,
        };

        Ok(allowed)
    }

    fn handle_join_rule(
        &self,
        join_rule: &JoinRule,
        sender_user: &UserId,
        room_id: &RoomId,
    ) -> Result<bool> {
        if self.handle_simplified_join_rule(
            &self.translate_joinrule(join_rule)?,
            sender_user,
            room_id,
        )? {
            return Ok(true);
        }

        match join_rule {
            JoinRule::Restricted(r) => {
                for rule in &r.allow {
                    match rule {
                        join_rules::AllowRule::RoomMembership(rm) => {
                            if let Ok(true) = services()
                                .rooms
                                .state_cache
                                .is_joined(sender_user, &rm.room_id)
                            {
                                return Ok(true);
                            }
                        }
                        _ => {}
                    }
                }

                Ok(false)
            }
            JoinRule::KnockRestricted(_) => {
                // TODO: Check rules
                Ok(false)
            }
            _ => Ok(false),
        }
    }
}
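The visibility gating spread across `translate_joinrule`, `handle_simplified_join_rule`, and `handle_join_rule` can be sketched without the ruma types; the enum, function, and room-id strings below are illustrative stand-ins, not Conduit's actual API:

```rust
// Illustrative stand-in for the ruma join-rule types used above.
enum SimpleJoinRule {
    Public,
    Knock,
    Invite,
    // Restricted: also visible if the user is in any room on the allow list.
    Restricted { allowed_rooms: Vec<String> },
}

// Mirrors the logic of handle_join_rule: a room is shown in the hierarchy
// if it is public or knockable, if the user already joined it, or (for
// restricted rooms) if the user joined any room on the allow list.
fn user_can_see(rule: &SimpleJoinRule, joined_rooms: &[String], room_id: &str) -> bool {
    match rule {
        SimpleJoinRule::Public | SimpleJoinRule::Knock => true,
        SimpleJoinRule::Invite => joined_rooms.iter().any(|r| r == room_id),
        SimpleJoinRule::Restricted { allowed_rooms } => {
            joined_rooms.iter().any(|r| r == room_id)
                || allowed_rooms
                    .iter()
                    .any(|a| joined_rooms.iter().any(|r| r == a))
        }
    }
}
```

Note that, as in the real code, `KnockRestricted` allow rules are not yet checked (the `// TODO: Check rules` above), so a faithful sketch would treat it like `Knock` only via the simplified path.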
@ -0,0 +1,15 @@

use crate::{PduEvent, Result};
use ruma::{api::client::threads::get_threads::v1::IncludeThreads, OwnedUserId, RoomId, UserId};

pub trait Data: Send + Sync {
    fn threads_until<'a>(
        &'a self,
        user_id: &'a UserId,
        room_id: &'a RoomId,
        until: u64,
        include: &'a IncludeThreads,
    ) -> Result<Box<dyn Iterator<Item = Result<(u64, PduEvent)>> + 'a>>;

    fn update_participants(&self, root_id: &[u8], participants: &[OwnedUserId]) -> Result<()>;
    fn get_participants(&self, root_id: &[u8]) -> Result<Option<Vec<OwnedUserId>>>;
}
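A minimal in-memory model of the participant-tracking half of this trait might look like the following. `MemThreadStore` is a hypothetical name for illustration; the real backends store these lists in keyed database trees and return `Result`, which this sketch omits:

```rust
use std::collections::HashMap;

// Hypothetical in-memory stand-in for the participant-tracking part of
// the Data trait: thread-root pdu id -> list of participating user ids.
struct MemThreadStore {
    participants: HashMap<Vec<u8>, Vec<String>>,
}

impl MemThreadStore {
    fn new() -> Self {
        Self {
            participants: HashMap::new(),
        }
    }

    // Mirrors update_participants: overwrite the stored list for this root.
    fn update_participants(&mut self, root_id: &[u8], users: &[String]) {
        self.participants.insert(root_id.to_vec(), users.to_vec());
    }

    // Mirrors get_participants: None when the thread has no entry yet,
    // which is how callers distinguish a new thread from an existing one.
    fn get_participants(&self, root_id: &[u8]) -> Option<Vec<String>> {
        self.participants.get(root_id).cloned()
    }
}
```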
@ -0,0 +1,116 @@

mod data;

pub use data::Data;
use ruma::{
    api::client::{error::ErrorKind, threads::get_threads::v1::IncludeThreads},
    events::relation::BundledThread,
    uint, CanonicalJsonValue, EventId, RoomId, UserId,
};

use serde_json::json;

use crate::{services, Error, PduEvent, Result};

pub struct Service {
    pub db: &'static dyn Data,
}

impl Service {
    pub fn threads_until<'a>(
        &'a self,
        user_id: &'a UserId,
        room_id: &'a RoomId,
        until: u64,
        include: &'a IncludeThreads,
    ) -> Result<impl Iterator<Item = Result<(u64, PduEvent)>> + 'a> {
        self.db.threads_until(user_id, room_id, until, include)
    }

    pub fn add_to_thread<'a>(&'a self, root_event_id: &EventId, pdu: &PduEvent) -> Result<()> {
        let root_id = &services()
            .rooms
            .timeline
            .get_pdu_id(root_event_id)?
            .ok_or_else(|| {
                Error::BadRequest(
                    ErrorKind::InvalidParam,
                    "Invalid event id in thread message",
                )
            })?;

        let root_pdu = services()
            .rooms
            .timeline
            .get_pdu_from_id(root_id)?
            .ok_or_else(|| {
                Error::BadRequest(ErrorKind::InvalidParam, "Thread root pdu not found")
            })?;

        let mut root_pdu_json = services()
            .rooms
            .timeline
            .get_pdu_json_from_id(root_id)?
            .ok_or_else(|| {
                Error::BadRequest(ErrorKind::InvalidParam, "Thread root pdu not found")
            })?;

        if let CanonicalJsonValue::Object(unsigned) = root_pdu_json
            .entry("unsigned".to_owned())
            .or_insert_with(|| CanonicalJsonValue::Object(Default::default()))
        {
            if let Some(mut relations) = unsigned
                .get("m.relations")
                .and_then(|r| r.as_object())
                .and_then(|r| r.get("m.thread"))
                .and_then(|relations| {
                    serde_json::from_value::<BundledThread>(relations.clone().into()).ok()
                })
            {
                // Thread already existed
                relations.count += uint!(1);
                relations.latest_event = pdu.to_message_like_event();

                let content = serde_json::to_value(relations).expect("to_value always works");

                unsigned.insert(
                    "m.relations".to_owned(),
                    json!({ "m.thread": content })
                        .try_into()
                        .expect("thread is valid json"),
                );
            } else {
                // New thread
                let relations = BundledThread {
                    latest_event: pdu.to_message_like_event(),
                    count: uint!(1),
                    current_user_participated: true,
                };

                let content = serde_json::to_value(relations).expect("to_value always works");

                unsigned.insert(
                    "m.relations".to_owned(),
                    json!({ "m.thread": content })
                        .try_into()
                        .expect("thread is valid json"),
                );
            }

            services()
                .rooms
                .timeline
                .replace_pdu(root_id, &root_pdu_json, &root_pdu)?;
        }

        let mut users = Vec::new();
        if let Some(userids) = self.db.get_participants(&root_id)? {
            users.extend_from_slice(&userids);
            users.push(pdu.sender.clone());
        } else {
            users.push(root_pdu.sender);
            users.push(pdu.sender.clone());
        }

        self.db.update_participants(root_id, &users)
    }
}
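The core of `add_to_thread` is maintaining the `m.thread` bundled aggregation on the root event: bump the count and swap the latest event if the thread exists, otherwise start a new aggregation. A simplified stand-in (struct and function names are illustrative, not ruma's `BundledThread`) might look like:

```rust
// Illustrative stand-in for the m.thread bundled aggregation kept in the
// root event's `unsigned` field.
struct ThreadAggregation {
    count: u64,
    latest_event: String,
    current_user_participated: bool,
}

// Mirrors the two branches of add_to_thread above.
fn add_to_thread(existing: Option<ThreadAggregation>, new_event: String) -> ThreadAggregation {
    match existing {
        // Thread already existed: bump the count, replace the latest event.
        Some(mut agg) => {
            agg.count += 1;
            agg.latest_event = new_event;
            agg
        }
        // First reply: start the aggregation at count 1.
        None => ThreadAggregation {
            count: 1,
            latest_event: new_event,
            current_user_participated: true,
        },
    }
}
```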
@ -1,48 +0,0 @@

# For use in our CI only. This requires a build artifact created by a previous pipeline stage to be placed in cached_target/release/conduit
FROM valkum/docker-rust-ci:latest as builder
WORKDIR /workdir

ARG RUSTC_WRAPPER
ARG AWS_ACCESS_KEY_ID
ARG AWS_SECRET_ACCESS_KEY
ARG SCCACHE_BUCKET
ARG SCCACHE_ENDPOINT
ARG SCCACHE_S3_USE_SSL

COPY . .
RUN mkdir -p target/release
RUN test -e cached_target/release/conduit && cp cached_target/release/conduit target/release/conduit || cargo build --release


FROM valkum/docker-rust-ci:latest
WORKDIR /workdir

RUN curl -OL "https://github.com/caddyserver/caddy/releases/download/v2.2.1/caddy_2.2.1_linux_amd64.tar.gz"
RUN tar xzf caddy_2.2.1_linux_amd64.tar.gz

COPY cached_target/release/conduit /workdir/conduit
RUN chmod +x /workdir/conduit
RUN chmod +x /workdir/caddy

COPY conduit-example.toml conduit.toml

ENV SERVER_NAME=localhost
ENV CONDUIT_CONFIG=/workdir/conduit.toml

RUN sed -i "s/port = 6167/port = 8008/g" conduit.toml
RUN echo "allow_federation = true" >> conduit.toml
RUN echo "allow_encryption = true" >> conduit.toml
RUN echo "allow_registration = true" >> conduit.toml
RUN echo "log = \"warn,_=off,sled=off\"" >> conduit.toml
RUN sed -i "s/address = \"127.0.0.1\"/address = \"0.0.0.0\"/g" conduit.toml

# Enable Caddy auto cert generation for the Complement-provided CA.
RUN echo '{"logging":{"logs":{"default":{"level":"WARN"}}}, "apps":{"http":{"https_port":8448,"servers":{"srv0":{"listen":[":8448"],"routes":[{"match":[{"host":["your.server.name"]}],"handle":[{"handler":"subroute","routes":[{"handle":[{"handler":"reverse_proxy","upstreams":[{"dial":"127.0.0.1:8008"}]}]}]}],"terminal":true}],"tls_connection_policies": [{"match": {"sni": ["your.server.name"]}}]}}},"pki": {"certificate_authorities": {"local": {"name": "Complement CA","root": {"certificate": "/ca/ca.crt","private_key": "/ca/ca.key"},"intermediate": {"certificate": "/ca/ca.crt","private_key": "/ca/ca.key"}}}},"tls":{"automation":{"policies":[{"subjects":["your.server.name"],"issuer":{"module":"internal"},"on_demand":true},{"issuer":{"module":"internal", "ca": "local"}}]}}}}' > caddy.json

EXPOSE 8008 8448

CMD ([ -z "${COMPLEMENT_CA}" ] && echo "Error: Need Complement PKI support" && true) || \
    sed -i "s/#server_name = \"your.server.name\"/server_name = \"${SERVER_NAME}\"/g" conduit.toml && \
    sed -i "s/your.server.name/${SERVER_NAME}/g" caddy.json && \
    /workdir/caddy start --config caddy.json > /dev/null && \
    /workdir/conduit
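The `RUN` and `CMD` lines above patch the example config with `sed` at build and container-start time. The same substitutions can be exercised outside Docker; the temp file and the `complement.test` server name below are illustrative only:

```shell
# Illustrative: reproduce the Dockerfile's sed-based config patching on a
# throwaway file instead of conduit.toml inside the image.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
#server_name = "your.server.name"
port = 6167
address = "127.0.0.1"
EOF

SERVER_NAME=complement.test
# Same edits the Dockerfile applies: uncomment/set server_name, switch the
# port to 8008, bind all interfaces, and enable federation.
sed -i "s/#server_name = \"your.server.name\"/server_name = \"${SERVER_NAME}\"/g" "$cfg"
sed -i "s/port = 6167/port = 8008/g" "$cfg"
sed -i "s/address = \"127.0.0.1\"/address = \"0.0.0.0\"/g" "$cfg"
echo "allow_federation = true" >> "$cfg"

cat "$cfg"
```

Note that GNU `sed -i` edits in place; on BSD/macOS `sed`, `-i` requires a backup suffix argument (e.g. `sed -i ''`).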