• 0 Posts
  • 7 Comments
Joined 1 year ago
Cake day: June 19th, 2023


  • The counterpoint being that a centralized organization introduces checks, balances, and recovery options for some losses. If your credit card gets stolen and charged, or your bank suddenly becomes insolvent, there’s a significant chance your money can be recovered. Compare that to cryptocurrency, where a compromised wallet or a collapsed exchange holding your assets leaves you at a complete loss with no recourse at all. Centralized systems obviously have plenty of issues, as Visa seems to be on an endless crusade to make everyone supremely aware of, but cryptocurrency being an alternative doesn’t make it a valuable or viable one.


  • While it’s true that EVs can be built with fewer moving parts in the drive system itself, and that companies could absolutely produce longer-lasting vehicles if they focused on longevity, there are still a lot of parts of a vehicle that simply won’t last beyond a certain point. The moving parts of an EV still include everything in the suspension, the wheels/brakes/steering, and a number of other components that are very costly to replace, not to mention the underlying frame/unibody itself, which is vulnerable to wear over time depending on the conditions it’s driven in. “The few moving parts that wear out” still covers a huge swath of a vehicle, even with the engine and transmission out of the equation.

    Well-built EVs designed for longevity and repairability could extend the lifespan of the average people mover by a great deal, but at the end of the day cars will, by their nature, eventually reach a point where the cost of repairing some major core component becomes too great to justify, outside of rare or collectible cases.


  • This is something I think a lot of people don’t get about all the current ML hype. Even if you disregard all the other huge ethical issues around sourcing training data, what does anybody think is going to happen if you take the modern web, a huge sea of extremist social media posts, SEO-optimized scams and malware, and just general toxic data waste, and then train a model on it without rigorously pushing it away from being deranged? There’s a reason all the current AI chatbots have had countless hours of human moderation and adjustment to make them remotely acceptable to deploy publicly, and even then there are plenty of infamous examples of them running off the rails and saying deranged things.

    Talking about an “uncensored” LLM basically comes down to saying you’d like the unfiltered experience of a robot that will casually regurgitate all the worst parts of the internet at you. So unless you’re actively trying to produce a model to do illegal or unethical things, I don’t quite see the point of contention, or what “censorship” could actually mean in this context.