“Strategy” implies he actually thinks about it. I think it’s just a reflex; fault belongs elsewhere, always. The man is incapable of critical thought, especially inward.
You can very safely remove the “probably” from your first sentence.
I mean, there is a hard limit on how much info your brain can take in. It’s time. Every hour spent learning one thing is an hour not spent learning everything else.
Agreed. The solution to this is to stop using LLMs to present info authoritatively, especially when it's aimed directly at the general public. The average person has no idea how an LLM works, and therefore no idea why they shouldn’t trust it.
My guess is that your name is so poorly represented in the training data that it just picked the most common kind of job history that is represented.
Yeah, exactly. The issue is precisely that it’s NOT just showing search results. MS’s software is generating libelous material and presenting it as fact.
Air Canada was forced to give a customer the compensation its chatbot made up. Germany/Europe in general is a bit stronger on consumer protections than Canada, so I’d expect MS would be held liable if this journalist decides to file suit.
Bullshit generator generating bullshit, news at 11.
Others have pointed out the degradation issue, but you’re also assuming that all plastics are thermoplastics. They are not. There’s huge variation in chemical composition and material properties between different plastics, and most of them can’t be melted and reformed.
Well, I have good news! There’s a huge number of other systems out there, most of them are quite good, and there’s plenty of very interesting mechanical innovation going on. I encourage you to explore, D&D isn’t the only game in town. ;-)
Yeah, 4e doesn’t deserve the hate it gets. I found it much more mechanically engaging to play than 3.X or 5e.
4e was when WotC discovered D&D has a very large problem - it’s not allowed to change anything, for worse or better.
*Thinnest and yet roughest. Not thick enough to be a barrier, and it can rub you raw to provide an entry point at the same time!
“Sealed” is also a vague suggestion with HVAC. Every ducting join, every piece of equipment, all of it leaks. I shudder to think how much heating/cooling is wasted that way.
This. Satire would be writing the article in the voice of the most vapid executive saying they need to abandon fundamentals and turn exclusively to AI.
However, that would be indistinguishable from our current reality, which would make it poor satire.
The issue with “Human jobs will be replaced” is that society still requires humans to have a paying job to survive.
I would love a world where nobody had to do dumb labour anymore, and everyone’s needs are still met.
What part of “we paid these guys and they said we’re fine” do you not understand? Why would they choose, pay, and release the results from a company they didn’t trust to clear them?
I’m not saying it’s rotten, but the fact that the third party was unilaterally chosen by and paid for by LMG makes all the results pretty questionable.
It’s hard to trust a firm that is explicitly being paid by the company they’re investigating. I could be convinced that they are actually a neutral third party and that their investigation was unbiased if they had a track record of finding fault with their clients a significant portion of the time. (I haven’t done the research to see if that’s the case.)
However, you have to ask yourself - how many companies would choose to hire a firm which has that track record? Wouldn’t you pick one more likely to side with you?
The way to restore credibility is to have an actually independent third party investigation. Firm chosen by the accuser, perhaps. Or maybe something like binding arbitration. Even better, a union that can fight for the employees on somewhat even footing with the company.
The fundamental difference is that the AI doesn’t know anything. It isn’t capable of understanding, and it doesn’t learn in the same sense that humans learn. An LLM is a (complex!) digital machine that guesses the next most likely word based essentially on statistics, nothing more, nothing less.
It doesn’t know what it’s saying, nor does it understand the subject matter, or what a human is, or what a hallucination is or why it has them. They are fundamentally incapable of even perceiving the problem, because they do not perceive anything aside from text in and text out.
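To make the "guesses the next most likely word" point concrete, here's a toy sketch of the idea using a bigram model: count which word most often follows each word in some training text, then always emit the most frequent follower. This is a drastic simplification of my own (a real LLM uses a neural network over a huge context, not raw bigram counts), but the objective is the same next-token guess, and it likewise "knows" nothing about what the words mean.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count word -> next-word frequencies in the training text."""
    words = text.split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def predict_next(follows, word):
    """Return the statistically most likely next word, or None if unseen."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(predict_next(model, "the"))    # "cat" follows "the" twice, "mat" once
print(predict_next(model, "zebra"))  # never seen -> None
```

The model has no concept of cats or mats; it only reproduces frequency patterns in its training text. That's the gap between statistical text generation and understanding.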
It’s more like, “I own 17 homes and it wasn’t that hard to get that many. They must not be trying hard enough.”
If an LLM had to say “I don’t know” when it doesn’t know, that’s all it would be allowed to say! They literally don’t know anything. They don’t even know what knowing means. They are complex (and impressive, admittedly) text generators.
Did you tell him you guess you have to stop doing non-web development then? Clearly you’re not qualified if you can’t have the corresponding title.