I am currently hosting more than 10 services, but only Nextcloud sends me errors periodically, and only Nextcloud is extremely, painfully slow. I quit this sh*t. No more troubleshooting and optimization.
There are mainly 4 services in Nextcloud I’m using:
- Files: a simple server for uploading and downloading binaries
- Calendar (with DAVx5): a sync server without a web UI
- Notes: simple note-taking
- Network folder: mounted in Dolphin on Linux
Could you recommend alternatives for these? All services should be exposed over HTTPS, so authentication (a login) is needed. I’ve tried note-taking apps like Joplin and Trilium but didn’t like them.
Thanks in advance.
If you’re having issues with Nextcloud being slow and throwing errors, it’s probably because the machine you are running it on is low on RAM and/or CPU.
I bring this up because whatever replacements you try would likely have the same issues.
My Nextcloud instance was nearly unusable when I had it on a Raspberry Pi 3, but when I moved it to a container on my faster machine (an AMD Ryzen 7 4800U with 16GB of RAM) it now works flawlessly.
The backing database type and the storage it runs on are just as important too.
I agree with this. It needs a good amount of CPU cycles and RAM. A Raspberry Pi struggled for me too.
My NC instance runs on a 24GB RAM, 4-CPU Ampere A1 host (Oracle), and still struggles. YMMV.
And it struggles as a photo backup host on an i5-7xxx with 16GB RAM at home.
It’s not absurdly slow, it’s just…irritating sometimes.
Yeah, I’ve got this in my setup as well and it’s been pretty slow. I thought it was a network thing because I’m currently using T-Mobile home internet, but I’m switching to a fiber network with 500Mbps up and down soon. I’m really hoping that changes things.
There are performance-tuning tweaks you can do on Nextcloud, like memory caching, etc.
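For reference, a minimal sketch of the usual caching tweak via occ, assuming APCu is available and a reachable Redis host named "redis" (the hostname and values here are examples, not necessarily your setup; the admin manual documents the same keys as config.php entries):
# run as the web server user, e.g. sudo -u www-data php occ ...
php occ config:system:set memcache.local --value='\OC\Memcache\APCu'
php occ config:system:set memcache.locking --value='\OC\Memcache\Redis'
php occ config:system:set redis host --value='redis'
php occ config:system:set redis port --value='6379' --type=integer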
Ooo Lovely! I’ll look into that!
What DB are you using?
Postgres.
Also using Redis; did all the typical perf checks listed on the NC site, etc.
Experiencing the same; a good CPU and lots of RAM would resolve the issue.
Even a basic SQLite Nextcloud, if properly optimized, should be able to deal with millions of files like it’s nothing. And that is the issue: the bugs and the lacking optimization…
4650G + 64GB RAM + MySQL, and it was constantly file-locking on a folder of just 21k files / 10GB.
I have written apps (in Go) that do similar work and process data 100 times faster than Nextcloud. Hell, my scrapers are faster than Nextcloud on a local network, and they are dealing with external data over the internet.
It’s BADLY designed software that puts the blame on the consumer to get bigger and better hardware for what is, essentially, early-2000s functionality.
Mysql and it was file locking on just a 21k 10GB folder constantly
It’ll definitely do that if you keep your database on a network share with spinning disks.
Spin up a container with SQLite on a RAM disk and point it at your same data location. Most of the problems go away.
It’ll definitely do that if you keep your database on a network share with spinning disks.
The database and Nextcloud were on a 4TB NVMe drive … in MySQL with plenty of cache/memory assigned to it. Not my first rodeo, …
I’m running on an SSD as a VM on a 10-year-old laptop and have had very few issues compared to running on Raspberry Pis in the past. It’s not my first rodeo either, and I found that Debian with the NextcloudPi setup script worked the best, then restoring from backup. The web UI is performing great, as well as bookmarks, contacts, calendar, video chats, and most things I’ve thrown at it. NVMe may be overkill, but the combination of solid CPU, RAM, and disk I/O should alleviate any problems. My hunch is there are other resource constraints or bottlenecks at play, perhaps DDoS or other attacks (I’ve experienced that for sure, and you can test by dropping your firewall ingress rules to confirm).
Also, this is FOSS and I find the features and usability are better than anything else out there, especially with Let’s Encrypt.
I was in the same boat when I was running NC in a container. I switched to a VM, and most of my issues have been resolved, except for Collabora. I am currently using the built-in Collabora server, which is slow.
- Syncthing for files.
- Proton Calendar (so not self-hosted)
- Joplin, using file-based sync with the aforementioned Syncthing. I saw you didn’t like it, though.
- I occasionally use scp
For calendaring, I also went with the Syncthing option, via DecSync. I can get my contacts and calendar on Android and in Thunderbird, so I can avoid yet another unnecessary webapp.
This does look cool! But I notice that there’s really only one contributor (technically two, but the second only did one tiny commit) and they haven’t contributed any code in over a year. I don’t want to invest too much time migrating to a stale if not dead project.
Honestly, I think that the lack of commits is more due to the application being feature complete than “dead”. I’ve been using it for at least 3 years now and it works quite well.
That’s a fine point! You talked me into checking it out. Thanks for the recommendation!
Sorry to hear you’ve had a bad experience. I’ve been running the lsio Nextcloud docker container for 4 years without any issues at all.
What exactly have you tried to do to address your Nextcloud problems?
Same, and looking forward to the responses here. Nextcloud is too big and complicated. I deployed Immich to cover the photo library. Still looking for a good solution for notes, though.
I have my issues with Nextcloud, but it’s still, by far, the best solution I’ve come across.
Nextcloud is a large and complex application; it does need more resources than one would probably expect.
Nextcloud macOS and iOS apps make running it a no-brainer for me. Nextcloud on macOS works better than Google Drive.
I used Nextcloud with Samba on the side for a while; these days I use Samba exclusively (mounting takes basically no time whatsoever) plus Syncthing for synchronization.
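(For anyone wondering what the Samba side looks like, a rough sketch of the mount, assuming cifs-utils is installed; the server name, share, and paths are placeholders.)
# one-off mount of the share; an equivalent line in /etc/fstab makes it permanent
sudo mount -t cifs //fileserver/share /mnt/share -o credentials=/etc/samba/creds,uid=1000,gid=1000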
Owncloud.
I personally never caught the Nextcloud hype and stuck with the original. So far I’ve heard (and seen, having tried it twice) nothing but trouble from Nextcloud, while my OwnCloud install continues to be rock solid going on 10 years (regularly updated, of course!).
Dunno, I’ve been running my Nextcloud for a long time now, even updating the lazy way over the web UI instead of the suggested CLI, and not even once have I had a problem that was Nextcloud’s fault.
I always recommend OwnCloud. It even has a RAW photo viewer plugin, and if you know anything about 24-megapixel RAW photos, they are tough to load. But with OwnCloud, a folder of 30 such pictures loads within 10-15 seconds.
Same. I ran OwnCloud and Nextcloud in parallel for a while until a Nextcloud update nuked it and my wife lost some of her college work.
After that I’ve appreciated the slower, more deliberate pace of OwnCloud.
I personally never caught the Nextcloud hype
The “hype” being simply that Nextcloud isn’t OwnCloud, which turned proprietary, no?
Owncloud is not proprietary (it’s AGPLv3) and I’m really not sure where people get that idea.
The original Nextcloud/Owncloud fork was due to disagreements in development direction, not (say) like Jellyfin/Emby where there was actually a license change. Nextcloud wanted to “move fast”, Owncloud wanted stability. There was potential concern around the time of the fork that, perhaps, hypothetically, some day, Owncloud might “go proprietary”, but going on close to 10 years that has not happened.
This is concerning to me because I’ve been considering ditching Synology and spinning up nextcloud. I like Synology drive but I’m tired of the underpowered hardware and dumb roadblocks and vendor lock-in nonsense. I’m very curious what you end up doing!
A confirmed yet still unresolved bug caused me and about 200 other people to lose data (metadata) for tons of files. Well, at least 200 reacted to the GitHub bug report I filed. I think you can easily find it because it’s the most upvoted yet unresolved issue.
Besides this, it’d often give random errors and just not function properly. My favorites are the unexplained file locks: my brother in Christ, what do you mean “error while deleting a file”? It’s 2023, holy shit, just delete the damn file. It’s ridiculously unreliable and fragile. They have thousands of bug reports open, yet they focus on pushing new, unwanted social features to become the new Facebook and Zoom. They should definitely focus on fixing the foundation first.
Do you have a link to that bugreport?
Thanks!
Nextcloud is great. I don’t doubt that OP is having problems, and I understand how frustration can set in and one might throw in the towel and look for alternatives, but OP’s experience is atypical. I’ve been running it for years without any issues. I should point out that I only use it for small-scale personal stuff, but it works well for me. I have it syncing on eight devices, including Linux, macOS, and Windows desktops; an Android phone; an iPad; and a Raspberry Pi. My phone auto-uploads new camera photos. I’m using WebDAV/FUSE mounts on some machines. Everything is solid.
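If it helps OP with the “network folder” use case: the WebDAV mounts are just Nextcloud’s standard DAV endpoint. A sketch with davfs2, where the hostname and USERNAME are placeholders for your own instance:
# Nextcloud serves files over WebDAV at /remote.php/dav/files/<user>/
sudo mount -t davfs https://cloud.example.org/remote.php/dav/files/USERNAME/ /mnt/nextcloud
In Dolphin the same endpoint also works directly as a webdavs:// URL.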
Not OP, but I run it on Docker with Postgres and Redis, behind a reverse proxy. All apps on NC have pretty good performance, and I haven’t had any weird issues. It’s on an old Xeon with 32GB, on spinning rust.
Do you have Redis talking to Nextcloud over the unix socket or just regular TCP? The former is apparently another way to speed up Nextcloud, but I’m struggling to understand how to get the containers using the unix socket instead.
I have both Postgres and Redis talking to Nextcloud through their respective unix sockets; I store the sockets in a named volume, so I can mount it on whatever containers need to reach them.
Do you mind sharing your Docker config so I can try to replicate it? Thank you.
Sure:
POSTGRES
---
version: '3.8'
services:
  postgres:
    container_name: postgres
    image: postgres:14-alpine
    environment:
      POSTGRES_PASSWORD: "XXXXXXXXXXXXXXXX"
      PGDATA: "/var/lib/postgresql/data/pgdata"
    volumes:
      - type: bind
        source: ./data
        target: /var/lib/postgresql/data
      - type: volume
        source: postgres-socket
        target: /run/postgresql
    logging:
      driver: json-file
      options:
        max-size: 2m
    restart: unless-stopped
networks:
  default:
    external:
      name: backend
volumes:
  postgres-socket:
    name: postgres-socket
REDIS
---
version: '3.8'
services:
  redis:
    image: redis:7.2-alpine
    command:
      - /data/redis.conf
      - --loglevel
      - verbose
    volumes:
      - type: bind
        source: ./data
        target: /data
      - type: volume
        source: redis-socket
        target: /var/run
    logging:
      driver: json-file
      options:
        max-size: 2m
    restart: unless-stopped
networks:
  default:
    external:
      name: backend
volumes:
  redis-socket:
    name: redis-socket
Here’s redis.conf, it took me a couple of tries to get it just right:
# create a unix domain socket to listen on
unixsocket /var/run/redis/redis.sock
unixsocketperm 666
# protected-mode no
requirepass rrrrrrrrrrrrr
bind 0.0.0.0
port 6379
tcp-keepalive 300
daemonize no
stop-writes-on-bgsave-error no
rdbcompression yes
rdbchecksum yes
# maximum memory allowed for redis
maxmemory 50M
# how redis will evict old objects - least recently used
maxmemory-policy allkeys-lru
# logging
# levels: debug verbose notice warning
loglevel notice
logfile ""
always-show-logo yes
NEXTCLOUD
---
version: '3.8'
services:
  nextcloud:
    image: nextcloud:27-fpm
    env_file:
      - data/environment.txt
    volumes:
      - type: bind
        source: ./data/html
        target: /var/www/html
      - type: volume
        source: redis-socket
        target: /redis
      - type: volume
        source: postgres-socket
        target: /postgres
      - type: tmpfs
        target: /tmp:exec
      - type: bind
        source: ./data/zz-docker.conf
        target: /usr/local/etc/php-fpm.d/zz-docker.conf
      - type: bind
        source: ./data/opcache_cli.conf
        target: /usr/local/etc/php/conf.d/opcache_cli.conf
    networks:
      - web
      - backend
    logging:
      driver: json-file
      options:
        max-size: 2m
    restart: unless-stopped
  crond:
    image: nextcloud:27-fpm
    entrypoint: /cron.sh
    env_file:
      - data/environment.txt
    volumes:
      - type: bind
        source: ./data/html
        target: /var/www/html
      - type: bind
        source: ./data/zz-docker.conf
        target: /usr/local/etc/php-fpm.d/zz-docker.conf
      - type: volume
        source: redis-socket
        target: /redis
      - type: volume
        source: postgres-socket
        target: /postgres
      - type: tmpfs
        target: /tmp:exec
    networks:
      - web
      - backend
    logging:
      driver: json-file
      options:
        max-size: 2m
    restart: unless-stopped
  collabora:
    image: collabora/code:23.05.5.4.1
    privileged: true
    environment:
      extra_params: "--o:ssl.enable=false --o:ssl.termination=true"
      aliasgroup1: 'https://my.nextcloud.domain.org:443'
    cap_add:
      - MKNOD
    networks:
      - web
    logging:
      driver: json-file
      options:
        max-size: 2m
    restart: unless-stopped
networks:
  backend:
    external:
      name: backend
  web:
    external:
      name: web
volumes:
  redis-socket:
    name: redis-socket
  postgres-socket:
    name: postgres-socket
The environment.txt file contains hostnames, logins, passwords, etc.:
POSTGRES_DB=nextcloud
POSTGRES_USER=xxxxxxx
POSTGRES_PASSWORD=yyyyyyyyyyyyyyyyyyy
POSTGRES_SERVER=postgres
POSTGRES_HOST=/postgres/.s.PGSQL.5432
NEXTCLOUD_ADMIN_USER=aaaaa
NEXTCLOUD_ADMIN_PASSWORD=hhhhhhhhhhhhhhhhhhh
REDIS_HOST=redis
REDIS_HOST_PORT=6379
REDIS_HOST_PASSWORD=rrrrrrrrrrrrr
The zz-docker.conf file sets some process tuning and the log format; some of it might not even be necessary:
[global]
daemonize = no
error_log = /proc/self/fd/2
log_limit = 8192

[www]
access.log = /proc/self/fd/2
access.format = "%R - %u %t \"%m %r%Q%q\" %s %f %{mili}d %{kilo}M %C%%"
catch_workers_output = yes
decorate_workers_output = no
clear_env = no
user = www-data
group = www-data
listen = 9000
listen = /var/www/html/.fpm-sock
listen.owner = www-data
listen.group = www-data
listen.mode = 0666
listen.backlog = 512
pm = dynamic
pm.max_children = 16
pm.start_servers = 6
pm.min_spare_servers = 4
pm.max_spare_servers = 6
pm.process_idle_timeout = 30s;
pm.max_requests = 512
The opcache_cli.conf file has a single line:
opcache.enable_cli=1
I don’t remember why it’s there but it’s working so I’m not touching it :-D
Good luck :-)
I dumped synology and just use proxmox for the automatic ZFS support, then I can run my apps in either containers or VMs and even do GPU passthrough if needed.
Also not OP. I run Nextcloud on a 10th-gen i3 on spinning rust and performance is good. I run it in an LXC container though, so without Docker.
How did you spin it up in an LXC container? I can’t find any install tutorials or files for that. Do you have a link or something for me?
I create an LXC container and then just install apache2, php, and mariadb by hand with apt, then I install Nextcloud from source.
You can try this tutorial, as it’s very close to what I did: https://docs.nextcloud.com/server/latest/admin_manual/installation/example_ubuntu.html
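Condensed, the steps in that tutorial look roughly like this (a sketch for Debian/Ubuntu; the PHP module list and package names vary a bit between releases):
# inside the LXC container
apt install apache2 mariadb-server libapache2-mod-php \
    php-gd php-mysql php-curl php-mbstring php-intl php-gmp php-bcmath php-xml php-zip
# fetch and unpack the latest release, then hand it to the web server user
wget https://download.nextcloud.com/server/releases/latest.tar.bz2
tar -xjf latest.tar.bz2 -C /var/www/
chown -R www-data:www-data /var/www/nextcloud
# then finish through the web installer, or with: sudo -u www-data php /var/www/nextcloud/occ maintenance:install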
PSA: saying “I run Nextcloud and don’t have any problems” doesn’t help anyone or contribute anything useful to the conversation. It just makes you look like an insecure fanboy.
The OP is exactly the same but in reverse. I haven’t had any issues, but I’m using MariaDB instead of the default SQLite.
Disagree, seeing as OP has not posted anything other than “I run Nextcloud and have problems”, providing a counter is straightforward and expected.
But they didn’t ask for help making nextcloud better, they asked for alternatives.
Well, the comments were helpful to me, in trying to determine if I want to put effort into setting up Nextcloud. A post full of alternatives, with people saying that Nextcloud is buggy? Obviously, look at the alternatives.
A post full of comments saying “you shouldn’t have those issues, want some help troubleshooting your config?” and a couple of alternatives? Probably worth looking into Nextcloud rather than writing it off.
Don’t bother, the Nextcloud hivemind is too strong.
No, it makes you look insecure about your objectivity. Spreading FUD about a FOSS project isn’t helpful, and it’s usually down to misconfiguration or poor hardware that it doesn’t run properly.
I see plenty of folks who think they’ve got Redis set up, but they’re following crap guides, so it isn’t actually working.
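A quick way to sanity-check a Redis setup, assuming you have shell access to the instance (the hostname, password, and socket path below are examples):
# ask Nextcloud what it thinks its cache backends are
sudo -u www-data php occ config:system:get memcache.local
sudo -u www-data php occ config:system:get memcache.locking
# and confirm Redis itself answers
redis-cli -h redis -a 'yourpassword' ping     # over TCP
redis-cli -s /var/run/redis/redis.sock ping   # or over the unix socket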
I use linuxserver.io’s Nextcloud Docker image. While I’ve seen people struggle to set up Nextcloud properly to the point of just giving up and installing the snap version of it, I can count the number of times I’ve needed to do manual intervention for Nextcloud with LSIO’s image. It works like a charm.
Second this. Running it on Portainer with their images. An absolute breeze with 8GB RAM and a 2TB SSD.
I just installed it bare metal; works like a charm.
LSIO is amazing, my first stop for container browsing! Followed in second place by hotio.dev.
Went from Nextcloud to FileBrowser for web file access, with Resilio/Syncthing under the hood for synchronisation. My family couldn’t be happier, but yeah, we are not using calendar features.
I love the idea of Nextcloud, but its overall concept of doing everything, but nothing well enough, was one of the reasons I decided to build S3Drive. We squeeze most of the “file-management” experience out of the protocol itself. That means that all you need to self-host is the S3 storage server (e.g. MinIO)… but if you don’t feel like it just yet, you can buy S3 from anyone else (e.g. Backblaze / Wasabi / Synology / Cloudflare etc.) and enable 100% Rclone-compatible E2E encryption to protect your privacy.
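To illustrate the Rclone-compatible part, a sketch of an rclone.conf with a crypt remote layered over an S3 remote (the MinIO endpoint, bucket name, keys, and password are placeholders; rclone writes the obscured password for you if you use "rclone config"):
[minio]
type = s3
provider = Minio
endpoint = http://localhost:9000
access_key_id = XXXXXXXX
secret_access_key = YYYYYYYY

[minio-crypt]
type = crypt
remote = minio:mybucket
filename_encryption = standard
password = ZZZZZZZZ
Anything written through minio-crypt (e.g. rclone mount minio-crypt: /mnt/drive) lands encrypted in the bucket.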