Mama told me not to come.

She said, that ain’t the way to have fun.

  • 0 Posts
  • 1.26K Comments
Joined 1 year ago
Cake day: June 11th, 2023


  • Idk, I think his tech knowledge is fine. He knows far more about cameras than I ever will (largely because I don’t care), and I honestly haven’t seen anything where he’s lacking on the tech-knowledge front. His reviews, when critical, are usually quite comprehensive. For his audience and the products he reviews, he’s plenty tech savvy, probably more so than most of his audience; he just doesn’t put that on display unless it’s relevant to the video.

    His channel is all about “hey, check out this cool tech gadget,” and not “let’s deep dive into this particular tech niche.” Do you want to know how a given EV is to drive? MKBHD got you. Are you trying to decide between EVs? Comparing MKBHD’s videos may help narrow it down, but probably isn’t sufficient. Do you want a teardown of an EV to repair something? Look elsewhere.

    I occasionally watch his videos, but not enough to sub. I like his presentation style and his critical videos are generally pretty insightful.

  • That depends: do you copy verbatim, or do you process and understand the concepts and then create new work based on that understanding? If you copy verbatim, that’s plagiarism and you’re a thief. If you create your own answer, it’s not.

    Current AI doesn’t actually “understand” anything, and its “learning” is just ingesting input data. If you ask it a question, it isn’t reasoning about anything; it matches your prompt against the parts of its training data that fit, regurgitates a statistical mix of them, and usually omits the sources. That’s it (see the toy sketch after this comment).

    It’s a tricky line in journalism since so much of it is borrowed, and it’s likewise tricky with AI, but the main difference IMO is attribution: good journalists cite their sources; AI rarely does.
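
    A minimal sketch of the “regurgitate a statistical mix of the training data” idea described above. This is a hypothetical toy bigram model, not how any production LLM is implemented (real models are neural next-token predictors), but it shows generation driven purely by pattern frequency, with no concepts anywhere:

    ```python
    import random
    from collections import defaultdict

    # Toy "training": count which word follows which in the input text.
    training_text = (
        "good journalists cite sources "
        "good journalists check facts "
        "ai rarely cites sources"
    )
    follows = defaultdict(list)
    words = training_text.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev].append(nxt)

    # Toy "inference": sample a plausible continuation of the prompt.
    # No understanding involved, just frequency of observed word pairs.
    def generate(prompt_word: str, length: int = 6) -> str:
        out = [prompt_word]
        for _ in range(length):
            options = follows.get(out[-1])
            if not options:
                break
            out.append(random.choice(options))
        return " ".join(out)

    print(generate("good"))  # e.g. "good journalists cite sources"
    ```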

  • “What we are talking about is the act of reading and/or learning and then using that information in order to synthesize new material.”

    Sure, but that’s not what LLMs are doing. They’re breaking works down to reproduce portions of them in answers. Learning is about concepts; LLMs don’t understand concepts, they just compare inputs with training data to produce synthesized answers.

    The process a human goes through is distinctly different from what current AI goes through. What an AI does is closer to a journalist copy-pasting quotations into their article, which falls under fair use. The difference is that AI will synthesize quotations from many sources at once, whereas a journalist will generally do one at a time, but it’s still the same basic process.

  • I disagree that it needs to be explicit. The current law is the fair use doctrine, which generally has more to do with the intended use than specific amounts of the text/media. The point is that humans should know where that limit is and when they’ve crossed it, with motive being a huge part of it.

    I think machines and algorithms should have to abide by a much narrower understanding of “fair use” because they don’t have motive or the ability to intuit when they’ve crossed the line. So scraping copyrighted works to produce an LLM should probably be illegal in general, imo.

    That said, our current copyright system is busted and desperately needs reform. We should be limiting copyright to 14 years (as in the original Copyright Act of 1790), with an option to explicitly extend it for another 14 years. That way LLMs could scrape content published >28 years ago with no concerns, and most content published >14 years ago (especially forums and social media, where an explicit copyright extension is incredibly unlikely). That would be reasonable IMO and would sidestep most of the issues people have with LLMs. A tiny sketch of that rule follows.
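
    A minimal sketch of the proposed 14 + 14 scheme, assuming the rule described above. This is hypothetical policy from this comment, not current law, and the function name and boundary handling are my assumptions:

    ```python
    def is_public_domain(published_year: int, current_year: int, renewed: bool) -> bool:
        """Hypothetical 14 + 14 copyright term from the comment above (not current law)."""
        age = current_year - published_year
        if age > 28:       # past even an explicitly renewed term
            return True
        if age > 14:       # past the initial term; free unless renewed
            return not renewed
        return False       # still inside the initial 14-year term

    # Example: a 1995 forum post, checked in 2024, never renewed.
    print(is_public_domain(1995, 2024, renewed=False))  # True -> fair game to scrape
    ```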