I liked Win2K, yes - then Linux :)
My trivial (non-legal ;) answer: if you are working for a corporation that wants to patent something or release it under a closed license, then the moment you have looked at a single line of my code relevant to what you are doing, you are forbidden from releasing under any more restrictive license. If you are a private person working on open source? Then you be the judge of whether you copied enough of my code that it is more than just “inspired by”.
For most of history you would be better off if you could kill the next village over.
That is an incredibly stupid take. For most of history, the planet was so vast that people had plenty of room to hunt / farm / whatever. And no, killing other humans is not in our DNA; the only people who feel that way are those with brain damage or developmental defects.
again, I don’t have a problem with copying code - but I as a developer know whether I took enough of someone else’s algorithm so that I should mention the original authorship :) My only problem with circumventing licenses is when people put more restrictive licenses on plagiarized code.
And - I guess - in conclusion: if someone makes a license so free that a restrictive (commercial) license or patent can be put on plagiarized / derived work, that is also something I don’t want to see.
As I am a big proponent of open source, there is nothing wrong even with copying code - the point is that you should not be allowed to claim something as your own idea and definitely not to claim copyright on code that was “inspired” by someone else’s work. The easiest solution would be to forbid patents on software (and patents altogether) completely. The only purpose that FOSS licenses have is to prevent corporations from monetizing the work under the license.
“Why does no one say murder is bad unless China is murdering”
I cannot fathom how you absolutely nailed the essence of my comment, yet misunderstood it (and - arguably - your own example) so fundamentally.
Let me try to help, once:
“Why do most people not complain about murder when Microsoft is doing it, but when China is doing it, the very justified outrage can be heard?”
I am also sick to the core about this aspect of humanity. I feel that we as a species are just about developed enough to understand what a better world would look like, how people should act, what “the right thing to do” is - and very much not developed enough to overcome our egoism and narcissism to make it happen, so far too often we do the wrong thing despite knowing better.
With the obligatory “fuck everyone who disregards open source licenses”, I am still slightly amused at the raised eyebrows here, while nearly no one complains about MS using GitHub to train their Copilot LLM, which will help circumvent licenses & copyrights by the bazillion.
Also, the winners will get to interpret who the Axis and who the Allies were in the history books…
People who lobby decision makers at major distributions to make their software the de-facto standard, instead of leaving it to the user base, have a deeply anti-democratic mindset, and that makes them assholes.
“barely any” is neither entirely accurate, nor does it excuse the use of flatpaks.
That is indeed exactly my point. LLMs are just a language-tailored expression of deep-learning, which can be incredibly useful, but should never be confused for any kind of intelligence (i.e. logical conclusions).
I appreciate that you see my point and admit that it makes some sense :)
Example where I think pattern recognition by deep learning can be extremely useful:
But what I am afraid is happening with people who do not see why a very simple algorithm is already AI, yet consider LLMs to be AI, is that they mentally reserve the label “AI” for whatever seems “AGI” / “human-like”. They mistake the patterns of LLMs for a conscious being, and that is incredibly dangerous in terms of trusting the answers LLMs give.
Why do I think they subconsciously imply (self-)awareness / consciousness? Because refusing to consider a control mechanism like a simple room thermostat as (very limited) AI means viewing it as “too simple” to be AI - in other words, such a person makes a qualitative distinction between control laws and “AI”, where a quantitative distinction between “simple AI” and “advanced AI” would be appropriate.
And such a qualitative distinction, one that elevates a complex word-guessing machine to “intelligence”, can only be made by people who actually believe there is understanding behind those word predictions.
That’s my take on this.
I’m not blindly hating. I despise the asshole responsible for taking that choice away from me on many major distros, and I wish him the plague for the manipulative approach he used to get there.
AI did boom, but people don’t realize the peak happened a year ago.
A simple control algorithm - `if temperature > LIMIT: turnOffHeater()` - is AI, albeit an incredibly limited one.
LLMs are not AI. Please don’t parrot marketing bullshit.
The former has an intrinsic understanding of a relationship grounded in reality; the latter has nothing of the sort.
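The thermostat control law above can be sketched in a few lines of Python (names like `LIMIT` and `Heater` are illustrative, not from the original comment):

```python
LIMIT = 21.0  # target temperature in degrees Celsius (illustrative value)

class Heater:
    """Trivial stand-in for the heater hardware."""
    def __init__(self):
        self.on = True

    def turn_off(self):
        self.on = False

    def turn_on(self):
        self.on = True

def thermostat_step(temperature: float, heater: Heater) -> None:
    """The control law from the comment: above the limit, turn the heater off."""
    if temperature > LIMIT:
        heater.turn_off()
    else:
        heater.turn_on()

heater = Heater()
thermostat_step(23.5, heater)
print(heater.on)  # prints False: heater switched off above the limit
```

The point being: this encodes an explicit, causal relationship between temperature and heater state, which is exactly the “intrinsic understanding grounded in reality” that a word-prediction model lacks.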
systemd
and a giant “fuck you” to Lennart Poettering for that. Not for creating an init system option - but for lobbying it into major distributions, instead of letting the users decide what they prefer. May he forever stub his toes on furniture.
If the AI boom is a dud,
Whaddya mean, “if”? Emperor wears no clothes…
Beyond root processes, none that I am aware of. Hence I configured all my internet applications and Steam to run in a jail :) firejail & bubblewrap come as native packages, unlike the flatpak contents.
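A hedged sketch of what “running internet applications in a jail” can look like with those tools (the flags are standard firejail / bubblewrap options; the applications named are just examples, not the commenter's actual setup):

```shell
# firejail: run a browser with a throwaway private home directory,
# so nothing it writes lands in the real $HOME:
firejail --private firefox

# firejail: run a document viewer with all network access removed:
firejail --net=none evince document.pdf

# bubblewrap: build the sandbox by hand, binding only what the
# program needs - read-only /usr, fresh /proc, /dev and /tmp,
# and no network namespace:
bwrap --ro-bind /usr /usr --proc /proc --dev /dev \
      --tmpfs /tmp --unshare-net /usr/bin/echo "sandboxed"
```

Per-application profiles shipped with firejail usually make the first form enough for day-to-day use; bubblewrap is the lower-level building block when you want full control over what the sandbox contains.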
isn’t flatpak by definition relying on a second software source, hence 2x as much risk as relying on a single source (your OS repo)?
Agreed, XP was the turning point - I decided I would never let such intrusive software onto my private computers, so I switched from Win2k to Linux.