• 0 Posts
  • 6 Comments
Joined 3 years ago
Cake day: June 9th, 2023


  • I think you are missing the point of how easy it is to fuck things up in a console

    No, I think you are. Why should a beginner ever even touch the CLI? You can also SSH into the Synology and fuck things up.

    Using a ‘friendly environment’ like Synology is no guarantee that you won’t fuck things up.

    Installing TrueNAS when you have no idea about almost anything is cumbersome, dealing with the millions of options (some of them incompatible with each other) is frustrating, cryptic error codes are discouraging…

    What millions of options? You select a drive, set a password, and you’re done. That’s one step fewer than on Synology.

    You brought up TrueNAS. TrueNAS, for example, also gives you safe boundaries and suggestions on how to set things up. Same as Synology. There is literally also a setup wizard for backups.

    AND AGAIN, just because you follow the Synology wizards does not mean your data is safe either. You can always fuck things up if you want to.


  • I see your point but in this world there are only two options: either you have the skills, the knowledge, and the time to do it yourself, or you need to outsource it.

    But you’re not outsourcing it?! You just chose a proprietary provider for a Docker Compose file and some RAID configuration. Everything is still on you to fuck up.

    Assuming the OP is a real noob, it is clear that the first two prerequisites are missing, making that option unacceptable; then you can only go buy something easy enough for the general public.

    Reading OP’s post again, it’s clear that OP is interested in learning those things.

    And on top of that, in a homelab the most sacred thing is the data; not the service, the data. If you misconfigure a NAS or the automated backup system, it could lead to the worst-case scenario: the data is lost forever.

    The exact same is true for your Synology NAS, plus the limitations of how Synology thinks you should do backups vs. what actually suits you.


  • I would absolutely discourage the use of Synology and probably any other brand in the NAS realm.

    Synology has pulled off some really scummy things in the last few years: their certified SSDs, where only a whitelist of SSDs could be used in an array, or when they tried to push their own HDDs and showed warnings and messages to worry the user that something was wrong. They also retroactively removed transcoding capabilities from their systems.

    Those systems are all quite limited for how expensive they are. They are great for simple things, but with the list OP posted you would be heavily limited and have to jump through hoops to have a well-functioning homelab/server.


  • I’ve heard AMD’s onboard graphics are pretty good these days, but I haven’t tried AMD CPUs on a server.

    The main issue is afaik still the software support; NVIDIA and Intel are years ahead there.

    The benefit of going with a dGPU is that in a few years, when for example AV1 takes off even more, you can just swap the GPU and you’re done; you don’t have to replace the whole system. That at least was my thinking for my setup. My CPU, a 3600X, is probably still good for another 10 years.


  • Do not go for server hardware; used consumer hardware is good enough for your use cases. Basically any machine from the last 5-10 years is powerful enough to handle the load.

    The most difficult decision is the GPU or transcoding hardware for your Jellyfin. Do you want to be power efficient? Then go with a modern but low-end Intel CPU; there you get Quick Sync as the transcoding engine. If not, I would go for a low-end NVIDIA GPU like the 1050 Ti or newer, and for example an old AMD CPU like the 3600.
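
    If you go the Intel Quick Sync route with Jellyfin in Docker, a minimal sketch of the Compose file looks like this (image name is the official one; the host paths and the render node are assumptions — check `/dev/dri` on your box):

    ```yaml
    services:
      jellyfin:
        image: jellyfin/jellyfin
        # Pass the Intel iGPU render node into the container for Quick Sync (VAAPI/QSV)
        devices:
          - /dev/dri/renderD128:/dev/dri/renderD128
        volumes:
          - ./config:/config        # Jellyfin config, example host path
          - ./media:/media:ro       # your media library, example host path
        restart: unless-stopped
    ```

    You then still have to enable hardware acceleration (QSV) in the Jellyfin dashboard under Playback; the device passthrough alone doesn’t switch transcoding on.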

    Storage also depends on budget. Having a backup of your data is much more important than having redundancy. You do not need to back up your media, but everything that is important to you, like the photos in Immich etc.

    I would go SSD since you do not need much storage: a separate 500 GB drive for your OS and a 4 TB one for the data. This is much more compact, reduces power consumption, and is much more durable for read-heavy applications, faster in operation, less noisy, etc.

    Ofc, HDDs are good enough for your use case and cheaper (by a factor of 2.5-3x here).

    8-16 GB of RAM would probably be more than enough.

    For any local redundancy or RAID, I would always go with ZFS.
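
    A ZFS mirror (RAID1-style) with two data disks is only a couple of commands. A sketch, assuming two spare disks; the pool name and device IDs are examples — look yours up with `ls /dev/disk/by-id/` first, and note `zpool create` wipes the disks:

    ```shell
    # Create a mirrored pool named "tank" from two whole disks
    zpool create tank mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2

    # A dataset for e.g. Immich photos, with lz4 compression enabled
    zfs create -o compression=lz4 tank/photos

    # Verify pool health and layout
    zpool status tank
    ```

    Remember: the mirror protects against one dead disk, not against accidental deletion — you still want an actual backup of the important data.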