drdaeman

joined 1 year ago
[–] drdaeman@lemmy.zhukov.al 2 points 1 year ago (1 children)

It's very hard to say anything definitive, because many of those services generate very different load depending on how much traffic/activity they get (and how that correlates with other services' usage at the same time). It could be anything from minimal load (all services for personal use, so a single user and low traffic) to a very busy system (a family-and-friends instance with high traffic), and hardware requirement estimates change accordingly.

As you already have a machine - just put them all there and monitor resource utilization. If it fits - it fits; if it doesn't - you'll need to replace (if you're CPU-bound; I believe CPUs are not upgradeable on those?) or upgrade (if you're RAM-bound) your NUC. You won't have to install everything twice anyway.
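For the monitoring part, a few stock commands already give a decent picture (just a generic sketch - docker stats only applies if you run things in containers):

docker stats      # live per-container CPU and memory usage
free -h           # overall memory situation on the host
htop              # per-process view, handy for spotting what's CPU-bound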

[–] drdaeman@lemmy.zhukov.al 14 points 1 year ago (1 children)

The fundamental issue is not the emoji XSS (that's just a vector), but how JWTs are implemented and [not] secured. I've read that it was reported at least as early as this January (https://akkoma.nrd.li/notice/AXXhAVF7N5ZH1V972W).

So, the developers were already aware, yet - as I'm checking 0.18.1 - they have not fixed the unsafe-inline and unsafe-eval CSP directives, haven't made the jwt cookie HttpOnly, and haven't done anything about exp and jti in the JWTs. I hope the recent events will make them do so, and not just patch this particular XSS.
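For illustration, this is roughly how one can peek into such a token and see which claims it actually carries (a rough, untested sketch; $JWT is assumed to hold the token value copied from the cookie):

# Decode the JWT payload (the second dot-separated segment, base64url-encoded)
payload="$(printf '%s' "$JWT" | cut -d. -f2 | tr '_-' '/+')"
while [ $(( ${#payload} % 4 )) -ne 0 ]; do payload="${payload}="; done   # restore base64 padding
printf '%s' "$payload" | base64 -d | jq .   # no exp = never expires, no jti = can't be revoked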

[–] drdaeman@lemmy.zhukov.al 1 points 1 year ago

So you can comment, vote and save without jumping through extra hoops (because you can only do those from your home instance).

[–] drdaeman@lemmy.zhukov.al 1 points 1 year ago* (last edited 1 year ago)

I'm not sure you understand me. What I wanted to say is that if APIs and protocols were copyrightable, and SCO and Oracle had won their respective lawsuits, the world would look so different (in regard to Free and Open Source Software) that I'm not completely sure the Fediverse would be a thing.

[–] drdaeman@lemmy.zhukov.al 1 points 1 year ago (1 children)

This means the Lemmy container is up and running, but there is some error on the backend that prevents it from functioning correctly. A pretty wild guess, but it's probably something with the database.

docker compose logs may tell a bit more about what's going on. Check out this page: https://join-lemmy.org/docs/administration/troubleshooting.html (just remember to replace docker-compose with docker compose - again, I specifically recommend uninstalling docker-compose so it won't accidentally mess things up).

If it's not something obvious, one thing you may try is tearing everything down (docker compose down -v), changing lemmy:latest to lemmy:0.18.1 in your Compose file, and starting again. This uses an explicit version number, which may help if the latest tag is not what we expect it to be. E.g. I had issues spinning up a clean 0.17.4 - it had a bug in DB migrations that was supposed to be fixed in 0.18.
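Roughly, the whole dance looks like this (a sketch - note that down -v wipes the volumes, i.e. your database, and the service name "lemmy" is just an assumption about your Compose file):

docker compose logs lemmy     # check the backend logs first
docker compose down -v        # tear everything down, volumes included
# edit the Compose file: image ...lemmy:latest -> ...lemmy:0.18.1
docker compose up -d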

[–] drdaeman@lemmy.zhukov.al 2 points 1 year ago

This means your docker-compose is very outdated or even broken.

To give you some context: originally, Docker Compose was a separate project - a separate program called docker-compose. It evolved for quite a while, and was eventually rewritten and included as part of Docker itself, becoming a sub-command (docker compose).

Just uninstall the old Compose (so it won't cause you any issues) and keep only Docker.
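How exactly to uninstall it depends on how it got there in the first place; as a sketch (assuming it came either from your distro's packages or from pip):

docker compose version             # confirm the built-in Compose plugin works
sudo apt remove docker-compose     # if it was installed as a Debian/Ubuntu package
pip uninstall docker-compose       # or, if it was installed via pip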

[–] drdaeman@lemmy.zhukov.al 1 points 1 year ago

It's OK. I do hacks like this all the time - no shame in this. However, when sharing a recipe with others it's best to promote better practices :)

[–] drdaeman@lemmy.zhukov.al 1 points 1 year ago

Yes - same as with the original script, upgrades would require more manual steps than just updating the version in the Compose file. This is how it's typically done.

There are ways to automate this. Docker Hub used to have a feature for automatic rebuilds when base images changed, but AFAIK it was removed some years ago. Now it's a matter of setting up CI with periodically scheduled pipelines (nightly or weekly), which is not a trivial undertaking.

Semi-automation can be achieved by using build-time arguments. I'm at my computer now, so here's a revised process:

First, a bunch of manual commands that will allow us to write a patch. I'll use the same crude sed statements - just because they work today, but YMMV.

# On the host: start a throwaway container as root
docker run -it --rm --user root tootsuite/mastodon bash

# Inside the container: keep a pristine copy, apply the edits, print the diff
cp -r /opt/mastodon /opt/mastodon.vanilla
sed -i 's/500/1000/g' /opt/mastodon/app/javascript/mastodon/features/compose/components/compose_form.js
sed -i 's/500/1000/g' /opt/mastodon/app/validators/status_length_validator.rb
diff -urN /opt/mastodon.vanilla /opt/mastodon

This will produce a nice patch that you can copy. Create an empty directory and save it there as change-limits.patch:

diff -urN /opt/mastodon.vanilla/app/javascript/mastodon/features/compose/components/compose_form.js /opt/mastodon/app/javascript/mastodon/features/compose/components/compose_form.js
--- /opt/mastodon.vanilla/app/javascript/mastodon/features/compose/components/compose_form.js   2023-07-07 17:50:26.046682458 +0000
+++ /opt/mastodon/app/javascript/mastodon/features/compose/components/compose_form.js   2023-07-07 17:50:49.626674917 +0000
@@ -90,7 +90,7 @@
     const fulltext = this.getFulltextForCharacterCounting();
     const isOnlyWhitespace = fulltext.length !== 0 && fulltext.trim().length === 0;

-    return !(isSubmitting || isUploading || isChangingUpload || length(fulltext) > 500 || (isOnlyWhitespace && !anyMedia));
+    return !(isSubmitting || isUploading || isChangingUpload || length(fulltext) > 1000 || (isOnlyWhitespace && !anyMedia));
   };

   handleSubmit = (e) => {
@@ -280,7 +280,7 @@
           </div>

           <div className='character-counter__wrapper'>
-            <CharacterCounter max={500} text={this.getFulltextForCharacterCounting()} />
+            <CharacterCounter max={1000} text={this.getFulltextForCharacterCounting()} />
           </div>
         </div>

diff -urN /opt/mastodon.vanilla/app/validators/status_length_validator.rb /opt/mastodon/app/validators/status_length_validator.rb
--- /opt/mastodon.vanilla/app/validators/status_length_validator.rb     2023-07-07 17:50:26.106682438 +0000
+++ /opt/mastodon/app/validators/status_length_validator.rb     2023-07-07 17:51:00.796671166 +0000
@@ -1,7 +1,7 @@
 # frozen_string_literal: true

 class StatusLengthValidator < ActiveModel::Validator
-  MAX_CHARS = 500
+  MAX_CHARS = 1000
   URL_PLACEHOLDER_CHARS = 23
   URL_PLACEHOLDER = 'x' * 23

Now, put the Dockerfile next to the patch:

ARG MASTODON_VERSION=latest
FROM tootsuite/mastodon:${MASTODON_VERSION}

# Install patch and apply our changes on top of the stock sources
USER root
RUN apt-get update && apt-get install -y --no-install-recommends patch
COPY change-limits.patch ./
RUN patch -up3 < change-limits.patch

# Rebuild the frontend assets so the new limit shows up in the UI
USER mastodon
RUN OTP_SECRET=precompile_placeholder SECRET_KEY_BASE=precompile_placeholder rails assets:precompile

And a shell script to semi-automate the upgrades. Note it requires curl and jq to be available to parse JSON.

#!/bin/sh

set -e

# Find the most recent version tag on Docker Hub (needs curl and jq)
MASTODON_VERSION="$(curl -s "https://hub.docker.com/v2/namespaces/tootsuite/repositories/mastodon/tags?page_size=100" | jq -r '.results[]["name"]' | sort -rV | head -n 1)"
echo "Latest version: ${MASTODON_VERSION}"

# Pull the stock image and build our patched one on top of it
docker pull "tootsuite/mastodon:${MASTODON_VERSION}"
docker build --build-arg "MASTODON_VERSION=${MASTODON_VERSION}" -t "my-mastodon:${MASTODON_VERSION}" .

And finally, create a file called .dockerignore that contains a single line saying build.sh. That's just a minor cosmetic touch saying that our build.sh is not meant to be part of the build context. If everything is correct, there should now be 4 files in the directory: .dockerignore, build.sh, change-limits.patch and Dockerfile.

Now, when you run build.sh, it will automatically find the latest version and build you a custom image tagged as e.g. my-mastodon:v4.1.3, which you can use in your Compose file. For a distributed system like Docker Swarm, Nomad or Kubernetes you'll want to tweak this script a little to push to some registry (your-registry.example.org/your/mastodon:v4.1.3) and possibly even apply changes further (e.g. docker service update --image ...).
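The registry variant is basically a tag-and-push appended to build.sh; something like this, with your-registry.example.org/your/mastodon being just a placeholder:

docker tag "my-mastodon:${MASTODON_VERSION}" "your-registry.example.org/your/mastodon:${MASTODON_VERSION}"
docker push "your-registry.example.org/your/mastodon:${MASTODON_VERSION}"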

Mutable tags like latest can become confusing or even problematic (e.g. if you want to downgrade, it's best to have the previous version's image readily available for some time - even if the build process is reproducible), so I would really recommend using explicit version number tags.

Hope this helps.

[–] drdaeman@lemmy.zhukov.al 2 points 1 year ago (5 children)

This looks good to me. I suspect the problem is not with the compose file itself, but in the tool you're invoking - something must be wrong with docker-compose. Try using docker compose up -d instead of docker-compose up -d (requires Docker v20.10.13+).

Posting the output of docker-compose version, docker version and docker compose version may shed some light on this.

[–] drdaeman@lemmy.zhukov.al 2 points 1 year ago* (last edited 1 year ago) (4 children)

I don't mean to discourage you - this is a good hack. But as a seasoned code monkey, I see a few things that could be improved so they have less chance of biting you in the future. Please feel free to disregard this, of course.

s/500/…/g

This is a bit overbroad, as it replaces any “500” in those files. It works now, as the limit you want to tweak is probably the only occurrence, but it's a crude approach that may inadvertently break at any moment.

docker exec

Those changes are ephemeral and won't survive if the container is re-created for any reason (unless /opt/mastodon is a volume - I guess this is how it survives docker container restart?). I would rather recommend building your own custom image. Start by making a patch file:

docker run -it --rm --user root <mastodon image> bash
cp -r /opt/mastodon /opt/mastodon.vanilla
sed <your-updates-here> # or you can run vi or nano or any other editor
diff -urN /opt/mastodon.vanilla /opt/mastodon
exit

Take diff’s output, save it to fix-limits.patch in a new empty directory, then write a brief Dockerfile next to it, that goes like this:

FROM <base-mastodon-image>
COPY fix-limits.patch ./
RUN patch -p2 < fix-limits.patch

And finally run docker build -t my-mastodon . and use my-mastodon as the replacement image. This will ensure your changes persist, plus you'll have a proper patch file that you can use with any version (the point is, it will warn you if something changes in a way that the patch no longer applies cleanly).

I'm writing this on a phone, from scratch, without any testing, so you may need to tweak things a little bit. E.g. I'm not sure what the WORKDIR in the base image is - I'm just assuming it's /opt/mastodon (which it probably is), but you may need to edit the COPY command's second argument and/or the -p parameter to patch.
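If in doubt, the WORKDIR can be checked directly against the image (a quick sanity check; tootsuite/mastodon here is just the stock image name):

docker inspect --format '{{.Config.WorkingDir}}' tootsuite/mastodon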

[–] drdaeman@lemmy.zhukov.al 2 points 1 year ago

Some apps have hardcoded assumptions about their paths, making that kind of setup harder to achieve (you'll have to patch the apps or do on-the-fly rewrites).

Then there's also a potential cookie sharing/collision issue. If apps don't scope their cookies to specific paths, they may both use a same-named cookie, and this may cause weird behavior.

And if one of the apps is compromised (e.g. has an XSS issue) it’s a bit less secure with paths than with subdomains.

But don't let me completely dissuade you - paths are a totally valid approach, especially if you group multiple closely related things (e.g. Grafana and Prometheus) under the same domain name.

However, if you feel that setting up a new domain name is a lot of effort, I would recommend investing some time in automating it.

[–] drdaeman@lemmy.zhukov.al 11 points 1 year ago

I don't think the XMPP comparison is correct.

First, in my personal (subjective!) opinion, XMPP died primarily for an entirely different reason: by design, it had trouble working on mobile devices. Keeping the connection alive was either battery-expensive or outright impossible, and using the OS's native push notifications had significant barriers.

As for Google Talk - it just came and went. Because they never had proper MUCs (multi-user conferences, think communities), in my own (again, personal, thus subjective - not objective!) experience it was quite the opposite of how the article paints it. Whoever participated in the chatrooms I've been in and had used a Google account hated Google's decision and moved to XMPP. I'm no fan of Google, but their impact on XMPP was not strictly negative - they contributed some useful XEPs and free software libraries, after all. Although, of course, for those who used XMPP primarily as a classic messenger (like MSN, AIM or ICQ) for private 1:1 chats, things surely looked different.

Now, why I think the comparison is not correct: Threads' situation is different because of fundamental differences in how those systems operate - and not in favor of Threads/Meta. If Threads were a Lemmy- or XMPP-MUC-like system (that is, with communities/groups hosted on particular servers), it would be a more complicated story, where the Fediverse could even theoretically score a net win. But as I understand it, Threads is a Mastodon/Twitter-like thing, and its users' content will stay with Meta, entirely at Meta's discretion - both whether they let other systems access it and when they pull the plug. Given that Meta is also not likely to contribute to FLOSS Fediverse projects, their Fediverse presence is of questionable benefit, to say the least.
