• 0 Posts
  • 396 Comments
Joined 1 year ago
Cake day: July 7th, 2023

  • Not even remotely, and it’s really important to understand a) why there is a difference, and b) why that difference matters, or else you are going to hoover up every bit of propaganda these desperate conmen feed you.

    People are not automated systems, and automated systems are not people.

    Something that people are generally pretty good at is understanding that a process has failed, even if we can’t understand how it has failed. As the adage goes “I don’t need to be a helicopter pilot to see one stuck in a tree and immediately conclude that someone fucked up.”

    LLMs can’t do that. A human and an LLM will both cheerfully produce the wrong answer to “How many Rs in Strawberry.” But a human, even one who knows nothing about cooking, will generally suspect that something might be up when asked to put glue on pizza. That’s because the human is capable of two things the LLM isn’t: reasoning and context. The human can use their reasoning to draw upon the context provided by their real life experience and deduce that “Glue is not food, and I’ve never previously heard of it being used in food. So something here seems amiss.”

    That’s the first key difference. The second is in how these systems are deployed. You see, the conmen trying to sell us all on their “AI” solutions will use exactly the kind of reasoning you’ve bought into - “Hey, humans fuck up too, it’s OK” - in order to convince us that these AI systems can take the place of human beings. But that process requires us to place an automated system in the position of a human.

    There’s a reason why we don’t do that.

    When we use automation well, it’s because we use it for tasks where the error rate of the automated system can be reduced to something far, far lower than that of a well trained human. We don’t expect an elevator to just have a brain fart and take us to the wrong floor every now and then. We don’t expect that our emails will sometimes be sent to a completely different address to the one we typed in. We don’t expect that there’s a one in five chance that our credit card will be billed a different amount to what was shown on the machine. None of those systems would ever have seen widespread adoption if they had a standard error rate of even 5%, or 1%.

    Car manufacturing is something that can be heavily automated, because many of the procedures are simple, repeatable, and controllable. The last part is especially important. If you move all the robots in a GM plant to new spots they will instantly fail. If you move the humans to new spots, they’ll be quite annoyed, but perfectly capable of moving themselves back to the correct places. Yet despite how automatable car manufacturing is, it still employs a LOT of humans, because so many of those tasks do not automate sufficiently well.

    And at the end of the day, a fucked up car is just a fucked up car. Healthcare uses a lot less automation than car manufacturing. That’s not because healthcare companies are stupid. Healthcare is one of the largest industries in North America. They will gladly take any automation they can get. I know this because my line of work involves healthcare companies regularly asking me for automation. But they also have a very, very low threshold for failure. If one of our systems fails even one time they will demand a full investigation of the failure.

    This is because automated systems, when they are employed, have to be load bearing. They have to be reliable enough that people can stop thinking about them, even though that same level of reliability isn’t demanded from the human components of these systems.

    This is largely because, generally speaking, humans have much more ability to recognize and correct the failures of other humans. Medical facilities organise themselves around multiple layers of trust and accountability. One of the demands we get most is for more tools to give oversight into what the humans in the system are doing. But that’s because a human is well equipped to recognize when another human is in a failure state. A human can spot that another human came into work hungover. A human can build a context for which of their fellow humans are reliable and which aren’t. Human systems are largely self-healing. High risk work is doled out to high reliability humans. Low reliability humans have their work checked more often.

    But it’s very hard for a human to build context for how reliable an automated system is. This is because the workings of that system are opaque; they do not have the context to understand why the system fails when it fails. In fact, when presented with an automated system that sometimes fails, the way most humans will react is to treat the system as if it always fails. If a button fails to activate on the first press one or two times, you will come back to that same facility a year later to find that it has become common practice for every staff member to press the button five times in a row, because they’ve all been told that sometimes it fails on the first press.

    When presented with an unreliable automated system, humans will choose to use a human instead, because they have assessed that they can better determine when the human has failed and what to do about it.

    And, paradoxically, because we have such a low tolerance for failure in automated systems, when presented with an automated system that will be taking on the work of a human, humans naturally expect that system to be more or less perfect. They expect it to meet the threshold that we tend to set for automated systems. So they don’t check its work, even when told to.

    The lie that LLMs fuck up in the same way that humans do is used to get a foot in the door, to sell LLM driven systems as a replacement for human labour. But as soon as that replacement is actually being sold, the lie goes away, replaced by a different lie (often a lie by omission): that this will be as reliable as every other automated system you use. Or, at the very least, that “It will be more reliable than a human.” The sellers say this meaning, say, 5% more reliable (in reality the actual failure rate of humans in these tasks is often much, much lower than that of LLMs, especially when you account for false positives, which are usually ignored whenever someone touts numbers saying that an LLM did a job better than a human). But the people using the system naturally assume it means “More reliable in the way you expect automated systems to be reliable.”

    All of this creates a massive possibility for real, meaningful hazard. And all of this is before you even get into the specific ways in which LLMs fuck up, and how those fuck-ups are much more difficult to correct or control for. But that’s a whole separate rant.


  • For the record the “change” from that deal would be, measured to any reasonable degree of accuracy, exactly one googol.

    It’s really really hard to explain how numbers that big work.

    Basically if you had ten trillion dollars, and you spent one single cent, you would have spent a greater proportion of your wealth than that fine would be as a proportion of one googol dollars.

    And just to really put all that in perspective, let’s talk about how big that fine actually is.

    It’s frequently said that it’s more than the entire world’s GDP, but that’s not even close. Imagine if every single planet (not “habitable planet”, just “planet”) in our galaxy - all eight trillion of them - was terraformed to support life. Imagine if all of them had a population and economy like Earth. The entire galaxy’s GDP wouldn’t be enough.

    In fact, a hundred of those galaxies wouldn’t be enough. A thousand wouldn’t be enough. A hundred thousand wouldn’t be enough. It would take 20 million of those galaxies to pay that fine (at the time of reporting; by now it’s more galaxies than exist in all the known universe, because it doubles every day).

    And all of that would still be a rounding error to a rounding error against one googol.
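
    To make the comparison concrete, here’s a minimal sketch (in Python) of the two proportions being compared. The fine figure is only an estimate derived from the comment’s own illustration (20 million Earth-like galaxies of 8 trillion planets, each with roughly the world’s GDP), not an official amount.

    ```python
    # Compare two proportions:
    #   1) one cent as a share of ten trillion dollars
    #   2) an (estimated) fine as a share of one googol dollars
    # The fine estimate below is hypothetical, built from the
    # comment's own "20 million galaxies" illustration.

    googol = 10 ** 100

    ten_trillion = 10 ** 13                       # $10,000,000,000,000
    one_cent = 0.01
    cent_share = one_cent / ten_trillion          # 1e-15

    world_gdp = 10 ** 14                          # roughly $100 trillion (rounded)
    planets_per_galaxy = 8 * 10 ** 12
    galaxies = 20_000_000
    fine_estimate = galaxies * planets_per_galaxy * world_gdp   # ~1.6e34
    fine_share = fine_estimate / googol                         # ~1.6e-66

    print(f"one cent / ten trillion dollars: {cent_share:.0e}")
    print(f"estimated fine / one googol:     {fine_share:.1e}")
    # The cent is a vastly larger fraction of ten trillion
    # than the fine is of a googol.
    ```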



  • I want to point out that even if they are, the incentive is still having the desired effect, because in that scenario it makes it much more profitable to sell an EV than to sell an ICE vehicle, meaning the manufacturers are going to push the EVs more. And given that incentive, they would still be strongly incentivized to price the EVs in a way that compares well with their ICE offerings, even if they could theoretically sell them cheaper.

    A big part of getting results is understanding how to turn greed to your advantage.





  • You’re missing the fact that a flatscreen TV will still often represent - as a portion of someone’s wealth - a far greater cost than a private jet would to a billionaire (rough numbers sketched below). Consider that most low income people are getting their cell phones on payment plans, whereas a multimillionaire can afford to buy a Lamborghini Gallardo out of pocket. On top of that, high end purchases like cars, yachts, houses, fine art, etc, often retain a lot of their resale value, turning them into investments in many cases, often reselling for more than their purchase price. So yes, I absolutely did account for the tax exemptions on “essentials”, and even when you factor those in, your sales-tax-only model still ends up being less onerous the more wealthy someone is.

    I also want to call out the unspoken implication that is often present with these theories - not accusing you of doing this, but it needs to be said - that items like phones, computers and TVs are extraneous luxuries that no poor person should ever own, as if enjoying a fulfilling life or engaging in relaxation are things that only the wealthy should be allowed to have access to.
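
    A minimal sketch (in Python) of that proportion argument, using entirely made-up round numbers for the wealth and price figures:

    ```python
    # Compare a purchase as a fraction of the buyer's total wealth.
    # All figures below are hypothetical, for illustration only.

    purchases = {
        # label: (buyer's total wealth in $, purchase price in $)
        "flatscreen TV, low-income household": (5_000, 500),
        "private jet, billionaire":            (3_000_000_000, 60_000_000),
    }

    for label, (wealth, price) in purchases.items():
        share = price / wealth
        print(f"{label}: {share:.1%} of total wealth")

    # With these numbers the TV costs 10.0% of the household's wealth,
    # while the jet costs 2.0% of the billionaire's.
    ```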


  • No

    An investment contract exists if there is an “investment of money in a common enterprise with a reasonable expectation of profits to be derived from the efforts of others.”

    And just to be absolutely clear, many cryptocurrencies do not qualify as investments, and the government agrees. However there are numerous other regulations that the crypto industry apparently cannot handle, such as “Know Your Client” laws, which all financial institutions have to abide by, and which exist to prevent money laundering (Binance’s internal emails revealed that they knew perfectly well that their clients were using their service to facilitate crime, and they were perfectly happy with that).

    These are not bad faith regulations. They exist for good reasons, and there is absolutely no good reason why the crypto industry shouldn’t also be subject to them. If these are currencies they should be regulated like currencies. If they are investments they should be regulated like investments.


  • That’s not what’s happening here. Microsoft management are well aware that AI isn’t making them any money, but the company made a multi-billion-dollar bet on the idea that it would, and now they have to convince shareholders that they didn’t epically fuck up. Shoving AI into stuff like Notepad is basically about artificially inflating “consumer uptake” numbers that they can then show to credulous investors to suggest that any day now this whole thing is going to explode into an absolute tidal wave of growth, so you’d better buy more stock right now, better not miss out.


  • There wasn’t a need to “define a new regulatory framework that actually fits” because, funnily enough, the existing regulatory framework already fits. It turns out, inventing new words doesn’t actually change the fundamental nature of the thing you’re describing. Refusing to call something an “investment” doesn’t change the fact that you’re selling an investment, refusing to call something a “security” doesn’t prevent it from being a security if it meets the definition.

    Edit: Sorry, let me address that ridiculous point about Coinbase “asking for clarity” directly. Yes, Coinbase repeatedly “asked for clarity” in the same manner as a dude in a girl’s DMs repeatedly asking for nudes while being told in the bluntest of terms to fuck off. They were given perfectly clear answers, they just didn’t like them, so they kept claiming, with zero fucking basis, that these well laid out rules that every financial institution has been following for decades were somehow “unclear” to them. It was a conversation not unlike a Sovereign Citizen trying to get out of a speeding ticket by claiming that they don’t understand where the officer’s authority comes from. The law is perfectly clear. If you don’t understand the law, you hire a lawyer who does. That’s a cost of doing business. Sticking “smart” in front of the word “contract” doesn’t suddenly invent a whole new field of law. I can’t suddenly get away with murder because I call it “crypto murder”. The law is based on what you do, not what you call it.




  • Companies release free products to bring people into their ecosystem. If your company is already using Workstation Player, and now they’re looking for a Type 1 hypervisor, it makes sense to seriously consider ESXi. In particular, the idea is that you get smaller companies hooked on your free products early, and then as they grow they buy more of your stuff rather than reconfigure their whole setup. You also get IT enthusiasts and home users to adopt, which gets you name recognition and builds familiarity. Then in the workplace those same users look to your brand as one to trust.

    For VMware, the problem is that they recently made a huge volley of deeply anti-consumer moves - basically told all their small customers to fuck off, and told their big customers to prepare to get fucked - and it really did not go the way they’d hoped. Turns out when you’re competing in a space where KVM, Hyper-V and XCP all exist, it’s actually not that difficult for customers to leave. So they did.

    This won’t directly help their bottom line but it’s presumably a sacrifice play to salvage their brand somewhat. Turns out when you tell people to fuck off, they tend to do just that.



  • You know what? Sure, fuck it, why not? I don’t even have a problem with OpenAI getting billions of dollars to do R&D on LLMs. They might actually turn out to have some practical applications, maybe.

    My problem is that OpenAI basically stopped doing real R&D the moment ChatGPT became a product, because now all their money goes into their ridiculous backend server costs and putting increasingly silly layers of lipstick on a pig so that they can get one more round of investment funding.

    AI is a really important area of technology to study, and I’m all in favour of giving money to the people actually studying it. But that sure as shit ain’t Sam Altman and his band of carnival barkers.