One time I was biking and someone coming the other way smiled at me. There was nothing to the interaction beyond those few seconds, but somehow it was such a powerful smile that it made my whole week, and I still remember it.
I would say: whichever one is polling highest who has signaled support in any way for the issue. Other things about them don’t matter because they aren’t going to win anyway. In this case, a lot of third-party votes for a single candidate are probably better than the same number of votes spread across different candidates, because it looks more like an organized voting bloc to politicians looking at the numbers in retrospect. Those politicians are the reason to vote at all if you’re voting third party: you’re trying to communicate via those numbers about how your vote can be obtained or lost.
I thought it was the electrons that orbited
If it wasn’t clear, I am not claiming that AI is better than a person at summarizing complex information.
The AI summaries were judged significantly weaker across all five metrics used by the evaluators, including coherency/consistency, length, and focus on ASIC references. Across the five documents, the AI summaries scored an average total of seven points (on ASIC’s five-category, 15-point scale), compared to 12.2 points for the human summaries.
The focus on the (now-outdated) Llama2-70B also means that “the results do not necessarily reflect how other models may perform,” the authors warn.
to assess the capability of Generative AI (Gen AI) to summarise a sample of public submissions made to an external Parliamentary Joint Committee inquiry, looking into audit and consultancy firms
In the final assessment ASIC assessors generally agreed that AI outputs could potentially create more work if used (in current state), due to the need to fact check outputs, or because the original source material actually presented information better. The assessments showed that one of the most significant issues with the model was its limited ability to pick-up the nuance or context required to analyse submissions.
The duration of the PoC was relatively short and allowed limited time for optimisation of the LLM.
So basically this study concludes that Llama2-70B with basic prompting is not as good as humans at summarizing documents submitted to the Australian government by businesses, and its summaries are not good enough to be useful for that purpose. But there are some pretty significant caveats here, most notably the relative weakness of the model they used (I like Llama2-70B because I can run it locally on my computer, but it’s definitely a lot dumber than ChatGPT), and the fact that summarizing government/business documents is likely a harder and less forgiving task than some other things you might want a generated summary of.
It can also be a solid rubber duck for debugging.
A lot of the time I get 3/4 of the way through writing a prompt and don’t bother hitting enter because I already figured it out. Having an incentive to put your thoughts down in writing is a great way to get them organized.
If other people are also immortal, the awkwardness of all of them eventually becoming your exes
But I think the point is, the OP meme is wrong to try painting this as some kind of society-wide psychological pathology, when it’s really just business people coming up with simple, reliable formulas to make money. The space of possible products people could want is large, and this choice isn’t only about what people want, but what will get attention. People will readily pay attention to and discuss with others something they already have a connection to in a way they wouldn’t with some new thing, even if they would rather have something new.
IIRC the story this is from is about a girl who is incapable of making the most basic decisions, so an assistant was hired to whisper in her ear what she should be doing
I wrote off politics media as hyperbolic and manipulative propaganda in 2016 and I actively distance myself from it, so I’ve only seen the broad strokes of this current election cycle. Unless you honestly believe you are doing important activism work, give yourself permission to just chill out about politics. If your life is full of problems caused by politics such that it’s impossible for you to chill out about politics, you have my sympathy.
that is not the … available outcome.
It demonstrably is already though. Paste a document in, then ask questions about its contents; the answer will typically take what’s written there into account. Ask about something you know is in a Wikipedia article that would have been part of its training data, same deal. If you think it can’t do this sort of thing, you can just try it yourself.
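If you want to try that outside of a chat UI, here’s a minimal sketch, assuming a local Ollama server on its default port with some chat model already pulled; the model name, sample document, and question are placeholders:

```python
# Minimal document Q&A sketch: send a document plus a question to a local
# LLM server and print the answer. Assumes an Ollama instance on its
# default port with a chat model already pulled; adjust MODEL to taste.
import json
import urllib.request

MODEL = "llama3"  # placeholder; any locally available chat model
DOCUMENT = """Acme Corp's Q3 report: revenue rose 12% to $4.1M,
while operating costs fell 3%."""  # stand-in for a pasted document
QUESTION = "By how much did revenue grow, and what was the total?"

payload = {
    "model": MODEL,
    "stream": False,
    "messages": [
        {"role": "system",
         "content": "Answer using only the document provided by the user."},
        {"role": "user",
         "content": f"Document:\n{DOCUMENT}\n\nQuestion: {QUESTION}"},
    ],
}

req = urllib.request.Request(
    "http://localhost:11434/api/chat",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    answer = json.loads(resp.read())["message"]["content"]

print(answer)
```

Swap in any document and the answer tracks what is actually in it, which is the whole point.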
Obviously it can handle simple sums; this is an illustrative example
I am well aware that LLMs can struggle especially with reasoning tasks, and have a bad habit of making up answers in some situations. That’s not the same as being unable to correlate and recall information, which is the relevant task here. Search engines also use machine learning technology and have been able to do that to some extent for years. But with a search engine, even if it’s smart enough to figure out what you wanted and give you the correct link, that’s useless if the content behind the link is only available to institutions that pay thousands a year for the privilege.
Think about these three things in terms of what information they contain and their capacity to convey it:
A search engine
A dataset of pirated content from behind academic paywalls
An LLM model file that has been trained on said pirated data
The latter two each have their pros and cons and would likely work better in combination with each other, but they both have an advantage over the search engine: they can tell you about the locked up data, and they can be used to combine the locked up data in novel ways.
Ok, but I would say that these concerns are all small potatoes compared to the potential for the general public gaining the ability to query a system with synthesized expert knowledge obtained from scraping all academically relevant documents. If you’re wondering about something and don’t know what you don’t know, or don’t have any idea where to start looking to learn what you want to know, an LLM is an incredible resource even with caveats and limitations.
Of course, it would be better if it could also directly reference and provide the copyrighted/paywalled sources it draws its information from at runtime, in the interest of verifiably accurate information. Fortunately, local models are becoming increasingly powerful and easier to work with, so the legal barriers to such a thing existing might not be able to stop it for long in practice.
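For a rough idea of what referencing sources at runtime could look like, here’s a toy retrieval sketch: it picks the best-matching local source for a question using plain keyword overlap (a crude stand-in for a real embedding search) and builds a prompt that tells the model to cite that source by name. The filenames and snippets are invented for illustration:

```python
# Toy retrieval-augmented prompting sketch: pick the most relevant local
# source for a question and build a prompt that asks the model to cite it.
# Keyword overlap stands in for a real embedding search; the "sources" are
# invented snippets for illustration.
SOURCES = {
    "doe2019_sleep.pdf": "Chronic sleep restriction impairs working memory "
                         "and attention in adults.",
    "lee2021_caffeine.pdf": "Moderate caffeine intake partially offsets "
                            "attention deficits after sleep loss.",
    "roe2020_exercise.pdf": "Aerobic exercise improves long-term memory "
                            "consolidation in older adults.",
}

def score(question: str, text: str) -> int:
    """Count shared lowercase words between the question and a source."""
    q_words = set(question.lower().split())
    return len(q_words & set(text.lower().split()))

def build_prompt(question: str) -> str:
    """Attach the best-matching source and require an inline citation."""
    best = max(SOURCES, key=lambda name: score(question, SOURCES[name]))
    return (
        f"Source [{best}]: {SOURCES[best]}\n\n"
        f"Question: {question}\n"
        "Answer using the source above and cite it by name."
    )

if __name__ == "__main__":
    # The resulting prompt would then be sent to a local model, e.g. the
    # way the earlier document Q&A sketch does.
    print(build_prompt("Does caffeine help attention after sleep loss?"))
```

In a real setup you’d retrieve several sources with proper embeddings and send the prompt to a local model as in the earlier sketch, but the citation mechanism is the same.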
The OP tweet seems to be leaning pretty hard on the “AI bad” sentiment. If LLMs make academic knowledge more accessible to people that’s a good thing for the same reason what Aaron Swartz was doing was a good thing.
a few dozen, mostly hexbear users. Though that was mostly from when I started using Lemmy, I haven’t felt the need to block anyone in a long time. My list of blocked communities is much larger.
A text message app with a keyword blocking feature is very useful to have
can’t see correlation without social agenda; they’re just two very different things. Science and agenda; or agenda using “science”. It’s bias. That’s very unscientific.
The idea is that the OP meme is likely coming from a belief that science and agenda are not different things, but rather are inseparable. That is very unscientific; it’s a fundamentally anti-intellectual attitude.
I think you’re reading statement B too literally. I’m pretty sure the idea behind it is related to critical theory and is an objection to the idea that rationality is trustworthy and that class conflict should be regarded as a higher truth. In that way statement B is relevant to statement A; it’s an implicit rejection of it.
I bought a large-capacity, unknown-brand cheap SD card somewhat recently. It seemed real at first, but after I installed an OS on it and ran it for a few minutes it somehow became bricked. At least I got a refund.
It’s more that society should have asked nicely instead of trying to manipulate me into it with years of brainwashing and coercive economics, so I made it a priority to participate as little as possible, and that’s on them. Want a functional system? Treat people with respect.