• 0 Posts
  • 19 Comments
Joined 1 year ago
Cake day: June 12th, 2023

  • There are already a ton of great examples here I can relate to (I’ve been using Linux since 1998 or ’99), but maybe the biggest difference today, apart from everything being SO MUCH EASIER now, is that the internet wasn’t really the thing it is today. Especially the bandwidth. It took hours and hours over the phone line to download anything; on a good day you could get 100MB in just under 4 hours. Of course things were a lot smaller too back then, but it still took ages, and I’m pretty sure I now have more bandwidth on my home connection than most of the local universities had back in the 90s.


  • Back when CRT monitors were a thing and all this fancy plug’n’play technology wasn’t around, you had modelines in your configuration files which told the system what resolutions and refresh rates your actual hardware could support. And if you put wrong values there, your analog and dumb monitor would just try to eat them as-is, with wildly different results. Most of the time it just resulted in a blank screen, but other times the monitor would literally squeal as it tried to push components well past their limits. And in extreme cases with older monitors it could actually physically break your hardware. And everything was expensive back then.

    Fun times.
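    For anyone who never had to write one: a modeline was literally one line in XF86Config/xorg.conf spelling out the pixel clock and sync timings, and the monitor just had to cope with whatever you typed. These days 'cvt' will calculate one for you; the numbers below are illustrative, not exact:

        $ cvt 1024 768 75
        # 1024x768 74.90 Hz (CVT 0.79M3) hsync: 60.15 kHz; pclk: 82.00 MHz
        Modeline "1024x768_75.00"  82.00  1024 1088 1192 1360  768 771 775 805  -hsync +vsync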



  • I want to prevent myself from reinstalling my system.

    Any even remotely normal file on disk won’t stop that, regardless of encryption, privileges, attributes or anything your running OS could do to the drive. If you erase the partition table, your ‘safety’ file is lost too, no questions asked, because at that point the installer doesn’t care about (nor even see or manage) individual files on the medium. And that is exactly what the ‘use this drive automatically for installation’ option does in pretty much every installer I’ve seen.

    Protecting myself from myself.

    That’s what backups are for. If you want to block any random USB-stick installer from running, you could restrict the boot options in the BIOS to exclude external media and set a BIOS password, but that only limits whether you can ‘accidentally’ reinstall the system from external media.

    And neither of those has anything to do with read/copy protection for the files. If they contain sensitive enough data they should be encrypted (and backed up), but that’s a whole other problem from protecting the drive against an accidental wipe. Any software-based limitation concerning your files falls apart immediately (excluding reading the data, if it’s encrypted) when you boot another system from external media or another hard drive, as whatever solution you’re using to protect them is no longer running.

    Unless you hand system management over to someone else (root passwords, BIOS password and settings…) who can keep you from shooting yourself in the foot, there’s nothing that can get you what you want. Maybe some cloud-based filesystem from Amazon with immutable copies could achieve it, but that’s not really practical on any level, financially very much included. And even then (if it’s possible in the first place, I’m not sure), if you’re the one holding all the keys and passwords, the whole system is at your mercy anyway.

    So the real solution is to back up your files, verify regularly that backups work and learn not to break your things.
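    As a rough sketch of what “verify regularly” can look like in practice (the paths here are made up, adjust to your own layout):

        # mirror home to a backup drive, then spot-check with checksums
        rsync -a --delete /home/user/ /mnt/backup/home/
        diff <(cd /home/user && find . -type f -exec md5sum {} + | sort) \
             <(cd /mnt/backup/home && find . -type f -exec md5sum {} + | sort)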




  • Then do ‘sudo apt install xfce4’ and ‘sudo apt purge cinnamon* muffin* nemo*’.

    It’s been a while since I installed xfce4 on anything, but if things haven’t changed, the metapackage doesn’t include xfce4-goodies and some other packages, so if you’re missing something it’s likely you just need to ‘apt install xfce4-whatever’. Additionally, you can keep Cinnamon around as long as you like as a kind of backup; just change lightdm (or whatever login manager LMDE uses) to use xfce4 as the default, as sketched below. And there are even lighter WMs than XFCE, like LXDE, which is also easy to install via apt and try out to see if it works for you.
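    If LMDE does use lightdm, the default session change is roughly this (section and value names may vary between versions, so treat it as a sketch):

        # /etc/lightdm/lightdm.conf
        [Seat:*]
        user-session=xfce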


  • I understand the mindset you have, but trust me, you’ll learn (sooner or later) the habit of pausing to check your command before hitting enter. For some it takes a bit longer and it’ll bite you in the butt a few times (so have backups), but everyone has gone down that path and everyone has fixed their mistakes now and then. If you want the hard (and fast) way to learn to confirm your commands, use dd a lot ;)

    One way to make it a bit less scary is to ‘mv <thing you want removed> /tmp’ and, once you’ve confirmed that nothing extra got moved, ‘cd /tmp; rm -rf <thing>’, but that still includes the ‘rm -rf’ part.
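    If you do that often, it’s easy to wrap into a small helper; the name and staging location here are made up:

        # "soft delete": stage things in /tmp instead of rm -rf'ing them
        trash() {
            local dest="/tmp/trash-$USER"
            mkdir -p "$dest"
            mv -v -- "$@" "$dest"/
        }
        # usage: trash ./old-stuff   (inspect /tmp/trash-$USER before emptying it)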


  • IsoKiero@sopuli.xyz to Linux@lemmy.ml · Linux on old School Machines?

    Absolutely. Maybe leave Gnome/KDE out and use a lighter WM, but they’ll be just fine, especially if they have 8GB or more RAM. I suppose those have at least dual-core processors, so that won’t be a (huge) bottleneck either. You can do a ton of stuff with those beyond just web browsing, like programming, text editing, spreadsheets and so on. I’d guess available RAM is the biggest limit on what they can do, especially if you like to open a ton of tabs in your browser.
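    If you’re not sure what an old machine actually has in it, a quick look before picking the desktop:

        nproc     # number of CPU cores
        free -h   # total and available RAM
        lscpu     # CPU model and details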


  • Make sure you have the package alsa-utils installed and try to run alsamixer. That’ll show all the audio devices your system detects. Maybe you’re lucky and it’s just that some volume control is muted, and if you’re not, it’ll at least give you some info to work with. The majority of audio devices don’t need any additional firmware and almost always work out of the box just fine. What’s the hardware you’re running? Maybe it’s something exotic that isn’t supported out of the box (which I doubt).

    And additionally, what are you trying to play audio from? For example, MP3s need non-free codecs to be installed, and without them your experience is “a bit” limited on the audio side of things.
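    Roughly what I’d poke at first, assuming a Debian-style system since alsa-utils was mentioned:

        sudo apt install alsa-utils
        aplay -l            # list the playback devices ALSA detects
        alsamixer           # check channel volumes; 'M' toggles mute
        speaker-test -c 2   # test noise on both channels as a quick check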


  • They both use the upstream version number (as in, the number the software developer gave to the release). They might additionally have some kind of revision number related to packaging, or a patch number, but as a rule of thumb, yes, the bigger number is the more recent. Whether you should use that as the only variable when deciding which to install is, however, another discussion. Sometimes the dpkg/apt version is preferred over the snap regardless of version differences, for example to save a bit of disk space, but that depends on a ton of different things.
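    To see the two versions side by side before deciding (firefox here is just an example package):

        apt policy firefox   # candidate version and which repository it comes from
        snap info firefox    # versions available in the snap channels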




  • I’m tempted to say the systemd ecosystem. Sure, it has its advantages and it’s the standard way of doing things now, but I still don’t like it. journalctl is a sad and poor replacement for standard log files, it has pulled in a ton of stuff that used to be separate little things of their own (resolved, journald, crontab…), making it a pretty monolithic thing, and at least for me it fixed a problem that wasn’t there.
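    As a concrete example of the difference, here’s checking SSH logins the old way versus the journald way (the unit name is the Debian one):

        grep sshd /var/log/auth.log              # plain text file, any tool works
        journalctl -u ssh.service --since today  # binary journal, queried via journalctl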

    Snapcraft (and Flatpak, to some extent) also attempts to fix a nonexistent problem, and at least for me they have caused more issues than benefits.


  • The command in question recursively changes file ownership to the account “user” and the group “user” for every file and folder in the system. On Linux, where many processes run as root or as various other accounts (like apache or www-data for a web server, mysql for a MySQL database and so on), after that command none of those services can access the files they need to function. And as the whole system is broken on a very fundamental level, changing everything back would be a huge pain in the rear.

    On this Ubuntu system I’m using right now I have 53 separate user accounts for various things. Some are obsolete and not in use, but the majority are used for something, and 15 of them are in active use by different services. Different systems have a bit different numbers, but you’d basically need to track down the millions of files on your computer and fix each of their permissions by hand. It can be done, and if you have a similar system to copy ownership from you could write a script to fix most of it (see the sketch below), but in the vast majority of cases it’s easier to just wipe the drive and reinstall.
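    The “copy from a similar system” idea could look roughly like this sketch; it assumes both machines have the same package set and that no paths contain odd whitespace:

        # on the healthy reference system: record owner and group per path
        find /etc /usr /var -xdev -printf '%u:%g %p\n' > owners.txt
        # on the broken system: replay the recorded ownership
        while read -r ug path; do
            chown "$ug" "$path"
        done < owners.txt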




  • I ran one for a while. In Finland the legislation is a bit different, so I wasn’t worried about breaking the law or getting sued, but my ISP got in touch pretty quickly. They were professionals and understood the situation when I explained why my traffic might look “a bit” suspicious, and I tried to clean the bad actors out of the traffic with filtering and whatnot, but eventually the ISP got enough complaints and was pretty much forced to tell me that either I shut the exit node down or they’d cut my line.

    As I said, they were very professional about it and managed the whole experiment as well as I could ever have hoped, but my agreement with them includes a clause that if I let malware and bad actors out of my network even after warnings, they can shut the connection down. And that’s understandable; I suppose they have similar agreements with other providers, and they received all the abuse mail my exit node was causing, so I’m still a happy customer with them even if they eventually took the hard way.

    I’m still pretty sure it would be possible to run a filtered exit node, but it would require far more time and other resources than I’m willing to spend on a project like that, and I’m not sure a single person is enough for it anyway.

    So, yes, do your homework and be careful. Legislation plays a significant part (depending on where you live), but your ISP most likely won’t like it either.


  • I did that for a while to try and learn about filtering malicious traffic out of the network. Doing it long term would definitely change my life, but very much not in a good way. It’s an endless whack-a-mole game, and the winning prize is that your ISP doesn’t give you a call weekly.

    It took a couple of weeks until the ISP first called and told me there was malicious traffic coming from my IP. I explained the situation and their representative was very understanding and handled the thing as well as he could. I tried to adjust filters, blocklists and all that jazz, which was pretty much a full-time job already, and I still couldn’t make it work at a sufficient level. I got another couple of calls from the ISP (again, handled spectacularly, considering I was pushing several hundred Mbps of dirty traffic out into the wild) and eventually they just plainly said they’d be forced to kill my connection if the situation didn’t improve. I ran a non-exit node for a while, but as that’s not a very interesting thing to run, I eventually shut it down to free resources for more interesting things.
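    For reference, a non-exit relay is only a few lines of torrc; the nickname is a placeholder:

        # /etc/tor/torrc -- middle relay, no exit traffic
        Nickname MyRelayNickname
        ORPort 9001
        ExitRelay 0
        SocksPort 0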

    If you have the time and knowledge to do that, I really encourage it, but for me it was too much to keep on the network while trying to maintain some sanity in my everyday life. I firmly believe that filtering the malicious traffic out while keeping an exit node running is an achievable goal; I just don’t have enough knowledge, or the time to gain it, to keep one running.

    And of course there are legal issues as well, and their severity heavily depends on where you live, so really do your homework before doing anything like that.