this post was submitted on 04 Jul 2023
348 points (97.0% liked)

Privacy

32165 readers
129 users here now

A place to discuss privacy and freedom in the digital world.

Privacy has become a very important issue in modern society, with companies and governments constantly abusing their power, more and more people are waking up to the importance of digital privacy.

In this community everyone is welcome to post links and discuss topics related to privacy.

Some Rules

Related communities

much thanks to @gary_host_laptop for the logo design :)

founded 5 years ago
MODERATORS
 
you are viewing a single comment's thread
view the rest of the comments
[–] ggnoredo@lemm.ee 6 points 1 year ago (1 children)

It's better than Google but it's not good. The only option is self-hosted SearXNG.

[–] Vexz@feddit.de 4 points 1 year ago (1 children)

Been using a self-hosted instance of SearXNG but recently moved away from SearXNG in general. Why? Search results more often than not ended in timeouts from the upstream search engines. It was frustrating and they never fixed it. In terms of privacy it's top notch though.

[–] somedaysoon@lemmy.world 2 points 1 year ago (2 children)

I went away from it because it seemed to have a memory leak and the docker container would eventually crash. I never truly investigated what was causing it.
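For what it's worth, a real leak (as opposed to just high usage) shows up as steady growth across repeated samples rather than a stable plateau. A toy sketch of that distinction, assuming you've logged the container's memory in MiB at regular intervals (the sample numbers and the 50 MiB threshold are made up for illustration):

```python
def looks_like_leak(samples_mib, min_growth_mib=50):
    """Heuristic: flag a leak if memory rises significantly overall
    and never settles back down to its starting level."""
    if len(samples_mib) < 2:
        return False
    start, end = samples_mib[0], samples_mib[-1]
    # Steady growth: total rise is significant and no later sample
    # ever drops back below the starting value (no plateau/reclaim).
    return (end - start) >= min_growth_mib and min(samples_mib[1:]) >= start

# Hypothetical hourly readings: a stable service vs. a leaky one.
stable = [340, 355, 348, 352, 349, 351]
leaky = [340, 370, 410, 455, 500, 560]
print(looks_like_leak(stable))  # False
print(looks_like_leak(leaky))   # True
```

Obviously a crashing container resets the curve, which is exactly why sampling over time beats eyeballing a single `docker stats` snapshot.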

I've been meaning to give whoogle a try:

https://github.com/benbusby/whoogle-search

[–] Vexz@feddit.de 3 points 1 year ago (1 children)

That's weird, I never had memory leak problems.

I don't like Whoogle because of its UI for image searches; imo it's really bad, but that's just my opinion. The image search is also the reason why I don't use Brave Search: it redirects you to Google or Bing for images. What's the point of being "a privacy-respecting search engine" when you get redirected to Google and Bing, which are the worst search engines in terms of privacy?

[–] somedaysoon@lemmy.world 1 points 1 year ago (1 children)
[–] Vexz@feddit.de 1 points 1 year ago (1 children)

None - I deleted it because I don't use it anymore. It wasn't much though and it never bloated, even when running for over a whole month.

[–] somedaysoon@lemmy.world 1 points 1 year ago* (last edited 1 year ago) (1 children)

It wasn’t much though and it never bloated, even when running for over a whole month.

Did you actually check on it, or are you just saying this because you never noticed a problem? No offense, but you haven't verified anything; you just seem to recall that there wasn't a problem.

I mean, what is "not much" to you? Because I asked someone else, who also thought it was running well, what theirs was using, and after only 1 day of uptime it was idle at 350MB of RAM... which is way too high for an idle search engine in my opinion. Another thing: if you were running it in a docker container with restart=unless-stopped, it could have been restarting without you even knowing about it.
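One way to rule out silent restarts: `docker inspect <container>` reports a `RestartCount` field, and anything above 0 means the container died and was restarted behind your back. A small sketch that pulls the field out of the inspect JSON; the container name `searxng` and the abbreviated sample document are hypothetical, feed in your own `docker inspect` output:

```python
import json

def restart_count(inspect_output):
    """Extract RestartCount from `docker inspect <container>` JSON.
    docker inspect returns a JSON array with one object per container."""
    data = json.loads(inspect_output)
    return data[0]["RestartCount"]

# Abbreviated, hypothetical inspect output for illustration.
sample = '[{"Name": "/searxng", "RestartCount": 3}]'
print(restart_count(sample))  # 3
```

In practice you'd pipe it in, e.g. `docker inspect searxng | python3 check.py`, or just run `docker inspect -f '{{.RestartCount}}' searxng` directly.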

[–] Vexz@feddit.de 0 points 1 year ago (1 children)

No, I didn't check, because it never got so big that it became suspicious to me that something might be wrong. Maybe it uses more RAM than other self-hosted search engines, but it never leaked memory in the sense of using more and more RAM the longer the instance ran. Just because it uses a lot of RAM doesn't mean it's leaking memory. It might just be badly developed and careless with your resources.

[–] somedaysoon@lemmy.world 2 points 1 year ago (1 children)

No, I didn't check, because it never got so big that it became suspicious to me that something might be wrong.

And that's all I needed to know, that is exactly what I thought, you didn't take the time to verify anything.

[–] Vexz@feddit.de 1 points 1 year ago* (last edited 1 year ago) (1 children)

Yes, because as I said it's not a memory leak. I would have noticed a memory leak because I keep an eye on the resources of my NAS (on which my SearXNG instance was hosted), and I didn't notice any unusual growth in RAM consumption. I just didn't check the RAM consumption of each individual container that was running. I would have done that if I had noticed unusual RAM consumption.

Look, I can't give you more detailed information than this because it's all from memory. All I wanted was to help as best I can by answering from my own experience. If that doesn't help or seems inappropriate to you, I'm sorry. I didn't mean to offend you or anything like that.

[–] somedaysoon@lemmy.world 0 points 1 year ago* (last edited 1 year ago) (1 children)

It doesn't help, because I can't trust that it's accurate, and neither should you. As I said, you can't recall what the container was using, and you can't tell me any specific numbers, like what you'd consider to be too high, so how do you expect me to trust that you definitely, for sure, would have noticed something that might have been as small as a 300MB swing?

Do you understand how this is not helpful in any way now?

[–] Vexz@feddit.de 1 points 1 year ago (1 children)

Do you understand how this is not helpful in any way now?

No, because you're just talking about how much RAM SearXNG consumes. In your original post you were complaining about a memory leak. I monitor the resources of my NAS. Even after a restart of my NAS, where every container was freshly started, the RAM consumption wasn't higher a month after the restart (without restarting any of the docker containers). And that is why I can say with full conviction that my SearXNG instance didn't leak memory.

This discussion is going nowhere from here so I'm gonna stop responding. I said everything to make my point as clear as possible. Have a nice day.

[–] somedaysoon@lemmy.world 1 points 1 year ago

You don't understand my point... I'm not going to trust someone who doesn't know what it was running at, which is why I asked for specifics on how much RAM it was using. I'm not trusting some random who says, "yeah bro, totally cool, not a problem, even though I have no idea what the RAM was at or even what I would consider to be high, but trust me bro, it's okay. I am soooo vigilant about RAM usage on my server despite having no clue about the RAM usage, but trust me bro, I would know if it was leaking."

Even after a restart of my NAS, where every container was freshly started, the RAM consumption wasn't higher a month after the restart (without restarting any of the docker containers). And that is why I can say with full conviction that my SearXNG instance didn't leak memory.

No, you can't, not unless you specifically paid attention to it, which you didn't; that's clear from you not knowing how much it was using. It's possible that every time you checked, your overall usage was measured just after a crash and the container restarting.

That's why I asked you for specifics on the RAM usage: when you couldn't provide them, it ruled out my trusting your opinion.

[–] DocBarkowitz@lemm.ee 2 points 1 year ago (1 children)

Interesting, my instance has been rock solid. When did you give it a try?

[–] somedaysoon@lemmy.world 1 points 1 year ago (1 children)

It was probably about a year ago now, so I should probably spin it up again. I think I'll do that today.

How much RAM does yours use?

[–] DocBarkowitz@lemm.ee 3 points 1 year ago (1 children)

I'm mobile for the holiday right now, so I'm not 100% sure on utilization. But it's running on a 2-core, 4 GB Ubuntu machine along with Caddy and Redis. I also have a Lemmy instance on that same machine, so Lemmy, Lemmy-UI, Postgres, and pictrs all fit on that machine and work for my daily use.

I can get exact utilization later tonight.

[–] DocBarkowitz@lemm.ee 2 points 1 year ago (1 children)
docker stats --no-stream
CONTAINER ID   NAME               CPU %     MEM USAGE / LIMIT    MEM %     NET I/O           BLOCK I/O         PIDS
dd2a774ad1a6   lemmy_lemmy-ui_1   0.00%     42.5MiB / 3.82GiB    1.09%     418kB / 7.24MB    2.65MB / 0B       15
718629b5514f   lemmy_lemmy_1      0.03%     6.82MiB / 3.82GiB    0.17%     1.52MB / 1.48MB   864kB / 0B        5
0c944dccc1e1   lemmy_postfix_1    0.00%     4.762MiB / 3.82GiB   0.12%     3.74kB / 0B       0B / 762kB        7
7f939790561c   lemmy_postgres_1   0.00%     46.45MiB / 3.82GiB   1.19%     1.09MB / 1.44MB   24.6kB / 2.16MB   9
14c7db5ae7ec   lemmy_pictrs_1     0.08%     23.36MiB / 690MiB    3.39%     3.81kB / 0B       0B / 0B           13
3695b8a0b67a   caddy              0.00%     9.984MiB / 3.82GiB   0.26%     0B / 0B           34.1MB / 12.3kB   9
12c8bd7c1cdf   redis              0.21%     3.555MiB / 3.82GiB   0.09%     101kB / 78.8kB    7.06MB / 0B       5
f03c3298de46   searxng            0.01%     349.9MiB / 3.82GiB   8.94%     9.21MB / 3.82MB   61.4MB / 61.4kB   25

I guess it is the largest consumer of memory. Unfortunately I rebooted yesterday; while setting up Lemmy I noticed there was a decent number of OS security updates. Otherwise I probably would have had stats from about 6 months of uptime. I'll keep an eye on it and see if it balloons.
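If it helps with keeping an eye on it, the MEM USAGE column can be parsed programmatically, so a cron job could log it over time. A sketch that converts docker's size strings to MiB; the unit suffixes assumed here (B/KiB/MiB/GiB) match the `docker stats --no-stream` output above:

```python
# Longest suffixes first so "MiB"/"GiB" match before the bare "B".
UNITS = {"GiB": 1024.0, "MiB": 1.0, "KiB": 1 / 1024, "B": 1 / 1048576}

def to_mib(size):
    """Convert a docker size string like '349.9MiB' or '3.82GiB' to MiB."""
    for unit, factor in UNITS.items():
        if size.endswith(unit):
            return float(size[: -len(unit)]) * factor
    raise ValueError(f"unrecognized size: {size}")

# The searxng row above: 349.9MiB used out of a 3.82GiB limit.
print(to_mib("349.9MiB"))           # 349.9
print(round(to_mib("3.82GiB"), 2))  # 3911.68
```

Logging that number once an hour and feeding the series into any growth check would settle the leak question one way or the other.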

[–] somedaysoon@lemmy.world 1 points 1 year ago* (last edited 1 year ago)

Thanks for reporting back on it.

Yeah, even that, I just don't understand why a search engine, sitting idle, consumes that much memory. The largest consumers of memory for me are: Omada Controller, Airsonic-Advanced, HomeAssistant, Lidarr, Sonarr, Paperless... all things that process a lot of data, or are written in Java, so it's to be expected they use more resources.

I would see it grow to 600MB+ and occasionally crash, so I just decided to use public instances.