• 1 Post
  • 18 Comments
Joined 1 year ago
Cake day: June 15th, 2023

  • Do you use autocomplete? AI, in many of the ways it’s being posited, is just spicy autocomplete. You can run a pretty decent local AI on SSE2 instructions alone.

    Now, you don’t have to accept spicy autocomplete, just like you don’t have to accept plain-Jane autocomplete. The choice is yours; Mozilla isn’t planning on spinning extra cycles in your CPU or GPU if you don’t want them spun.

    But I distinctly remember the grumbles when Firefox brought local db ops into the browser to give it memory for forms. Lots of people didn’t like the notion of filling out a bank form or something and then having that pop into an SQLite db.

    So, your opinion, I don’t blame you for. I don’t agree with it, but I don’t blame you. Completely normal reaction; don’t let folks tell you different. Just like we need the gas pedal for new things, we need the brake as well. I would hate to see you go and leave Firefox, BUT I would really hate for you to feel like something was forced upon you and you just had to grin and bear it.


  • Donald Trump says random bullshit he cannot deliver on, that will appeal to his most rabid supporters.

    The Biden tariffs have way more impact on total EV sales than reducing the EV credit to $0. The tariff blocks $6k EVs from entering the US; the credit just knocks $7.5k off a $50k car.

    Like, I get that we here in the United States are wary of Chinese things, but let’s all be honest: American car companies aren’t going to produce a sub-$20k EV anytime soon. LiFePO₄ batteries have been a thing for some time now; the excuse that the batteries are the main cost of the car is a faux argument from US makers who just aren’t ready to stop subsidizing their other vehicle platforms with EV sales.

    Trump’s arguments won’t do shit no matter which side of the aisle you sit on. The thing killing wide EV adoption in the US is the US EV makers.


  • Roney Beal, 72, a Shamong, New Jersey resident

    This person has literally nothing else to do but hammer this lawsuit until their untimely passing.

    When Beal told them that she would call her lawyer, they told her to get out of the casino and to not return. The Beals were then escorted off of the property, Di Croce said.

    But this person is literally 72, the casino could hypothetically just wait them out.

    Di Croce hopes Bally’s wants to make this situation right with Beal. After suffering a heart attack last year, Beal turned to the casino for enjoyment.

    I mean, there’s a good chance the lawyers just drag this out for as long as they can. Odds are favorable for the casino that things just sort themselves out naturally here.


  • There is a legal way to do this:

    New States may be admitted by the Congress into this Union; but no new State shall be formed or erected within the Jurisdiction of any other State; nor any State be formed by the Junction of two or more States, or Parts of States, without the Consent of the Legislatures of the States concerned as well as of the Congress

    — Article IV, Section 3, Clause 1

    Nebraska and South Dakota have a compact, approved by Congress, that swaps land between the states based on where the river is when particular assessments happen. So land leaving one state and going to another isn’t unheard of. If you look at NE and SD’s border in the southeast corner of SD, you’ll see the river and the border track pretty tightly. Now compare that to states with no such compact, like Arkansas and Tennessee: the river and the border are all kinds of messed up.

    The thing is, both Idaho’s and Oregon’s state assemblies will have to vote on it, as you indicated. It’s not up to the citizens to dictate when a state’s border can be redrawn. Once Idaho and Oregon have a compact, they will need to send it to DC for Congress to vote on. If it passes both the House and the Senate, the new compact can be enforced and the new borders drawn.

    From what I’ve heard Oregon will not even begin to entertain this notion.

    But yes, this is completely legal under the Constitution, and we’ve done it before too. We’ve even had the case of one state being split into two: Virginia and West Virginia. So we’ve used this part of the Constitution enough to know exactly how it needs to go down.

    Is it going to go down? IDK. California said they were going to split up into 3, 4, 5 different states; not holding my breath on that one either. Would be pretty neat to redraw Idaho though. Never liked its weird long edge on the west side. It’d look like someone giving the middle finger or something.



  • Yeah, I think that’s the bigger issue here. These devices pay their way by collecting data to sell off. What this “overhaul” indicates is that they haven’t quite figured out how to make these devices not only pay for themselves, but also generate a net background profit for the company.

    The only thing I’m reading from this story is that Amazon is just aiming for more dollar signs from Alexa. I’m going to tell you, in the day and age of Siri and whatever Google’s thing is, this is going to backfire massively on Amazon. This will likely collapse whatever paltry Alexa market is out there. And I have a good feeling they’ll look at that collapse as “well, the technology just isn’t a good money maker.” No, you idiots, it’s not a mass profit driver. I get how something not drawing double-digit percentage gains is a mystery to you all, but just because you cannot buy your fifteenth yacht from it doesn’t mean the technology is a failure.

    But it’s whatever, Amazon’s ship to wreck.


  • Quick things to note.

    One, yes, some models were trained on CSAM. In AI you’ll have checkpoints in a model; as a model learns new things, you get a new checkpoint. SD1.5 was the base model used in this. SD1.5 itself was not trained on any CSAM, but people have given additional training to SD1.5 to create new checkpoints that have CSAM baked in. Likely, this is what this person was using.

    Two, yes, you can get something out of a model that was never in the model to begin with. It’s complicated, but a way to think about it is: a program draws raw pixels to the screen, and your GPU applies some math to smooth that out. That math adds information that the program never distinctly pushed to your screen.

    Models have tensors, which, long story short, are a way to express the average way pixels should land to arrive at some object. This is why you see six-fingered people in AI art. There wasn’t any six-fingered person fed into the model; what you’re seeing is the averaging of weights pushing pixels between two different relationships for the word “hand”. That averaging is adding new information in the form of an additional finger.

    I won’t deep-dive into the maths of it, but there are ways to coax new ways of averaging weights to arrive at new outcomes. The training part is what tells the relationship between A and C to be B’. But if we wanted D’ as the outcome, we could retrain the model to average C and E, OR we could use things called LoRAs to nudge the low-rank adjustment from B’ to D’. This doesn’t require us to retrain the model; we’re just providing guidance on new ways to average things the model has already seen. Retraining on C and E to get D’ is the route old models and checkpoints had to go, and that requires a lot of images. Taking the outcome B’ and putting a thumb on the scale to push it to D’ is an easier route; it just requires a generalized teaching of how to skew the weights, which is much easier.

    I know this is massively summarizing things, and yeah, I get it, it’s a bit hard to conceptualize how we can go from something like MSAA to generating CSAM. And yeah, I’m skipping over a lot of steps here. But at the end of the day, those tensors are just numbers that tell the program how to push pixels around given a word. You can maths those numbers into results they weren’t originally arranged to produce in the first place. AI models are not databases; they aren’t recalling pixel-for-pixel images they’ve seen before, they’re averaging out averages of averages.

    I think this case will be a slam dunk, because it’s highly likely this person’s model was an SD1.5 checkpoint that was trained on very bad things. But with the advent of being able to change how the averaging itself works, rather than the source tensors in the model, you can teach a model new ways to average weights to obtain results it didn’t originally have, without any kind of source material to train on. It’s like the difference between spatial antialiasing and MSAA.
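    The “thumb on the scale without retraining” idea can be sketched in a few lines. This is a toy illustration of the Low-Rank Adaptation (LoRA) math only; the layer size, rank, and values are made up, and real diffusion models apply this to attention layers, not a single matrix:

```python
# Toy sketch of the LoRA idea: instead of retraining the full weight
# matrix W, learn two small matrices A and B whose product nudges W's
# output. Shapes and values are illustrative, not any real model's.
import numpy as np

rng = np.random.default_rng(0)

d = 8          # layer width (tiny, for illustration)
r = 2          # low rank -- far fewer parameters than d*d

W = rng.normal(size=(d, d))        # frozen, pretrained weights
A = rng.normal(size=(r, d)) * 0.1  # trainable down-projection
B = rng.normal(size=(d, r)) * 0.1  # trainable up-projection
alpha = 1.0                        # scaling factor for the adaptation

x = rng.normal(size=d)             # some activation flowing through

base_out = W @ x                          # what the original model computes
lora_out = W @ x + alpha * (B @ (A @ x))  # same, nudged by the low-rank term

# Only the 2*r*d numbers in A and B are trained; W is never touched.
print(base_out.shape, lora_out.shape)
```

    The point is that the adaptation trains 2·r·d numbers instead of d·d, which is why steering an existing model this way needs far less data than retraining it.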


  • Okay for anyone who might be confused on how a model that’s not been trained on something can come up with something it wasn’t trained for, a rough example of this is antialiasing.

    In the simplest of terms, antialiasing looks at a vector over a particular grid, sees what percentage of each cell it is covering, and then applies that percentage to shade the image and reduce the jaggies.

    There’s no information to do this in the vector itself; it’s the math that is giving the extra information. We’re creating information from a source that did not originally have it. Now, yeah, this is a really simple approach, and it might have you go “well, technically we didn’t create any new information”.
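    A minimal sketch of that coverage idea, using a hard diagonal edge as the “vector” (grid size and sample counts are arbitrary illustrative choices):

```python
# Toy coverage-based antialiasing: for each pixel, estimate what fraction
# of it lies below the line y = x (a hard, binary edge), and shade by that
# fraction. The fractional greys are information the binary edge itself
# never contained.
import numpy as np

size = 4          # 4x4 pixel grid
samples = 8       # 8x8 subsamples per pixel

img = np.zeros((size, size))
for py in range(size):
    for px in range(size):
        covered = 0
        for sy in range(samples):
            for sx in range(samples):
                # subsample center in continuous coordinates
                x = px + (sx + 0.5) / samples
                y = py + (sy + 0.5) / samples
                if y < x:               # below the diagonal edge
                    covered += 1
        img[py, px] = covered / samples**2  # fractional coverage = shade

print(img)  # pixels the edge crosses come out as intermediate greys
```

    Pixels entirely on one side come out as 0 or 1; pixels the edge crosses come out as in-between greys, the smooth line the source never drew.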

    At the end of the day, a tensor is a bunch of numbers that give weights to how pixels should arrange themselves on the canvas. We have weights that show how pixels should fall to form an adult. We have weights that show how pixels should fall to form a child. We have weights that show how pixels should fall to form a nude adult. There are ways to adapt the lower-rank portions of those weights to find new approximations; I mean, that’s literally what LoRAs do. It’s literally their name, Low-Rank Adaptation. As you train on this new, novel approach, you can wrap that up with a textual inversion, which is what that does: it gives a label, an ontological handle, to particular weights within a model.

    Another way to think of this: six-fingered people in AI art. I assure you that no model was fed six-fingered subjects, so where do they come from? The answer is that the six-fingered person is a complex “averaging” of the tensors that make up the model’s weights. We’re getting new information where there originally was none.

    We have to remember that these models ARE NOT databases. They are just multidimensional weights that tell pixels from a random seed where to go in the next step of the diffusion process. If you text2image “hand”, then there’s a set of weights that push pixels around to form the average value of a hand. What it settles into could be a four-fingered hand, five fingers, or six, depending on the seed and how hard the diffuser follows the guidance scale for that particular prompt’s weight. But it’s distinctly not recalling, pixel for pixel, some image it has seen earlier. It just has a bunch of averages of where pixels should go if someone says “hand”.

    You can generate something new from the average of complex tensors. You can put your thumb on the scale for some of those weights, give new maths to find new averages, and then when it’s getting close to the target you’re after use a textual inversion to give a label to this “new” average you’ve discovered in the weights.

    Antialiasing doesn’t feel like new information is being added, but it is. That’s how we can take the actual pixels being pushed out by a program and turn them into a smooth line the program did not distinctly produce. I get that it feels like a stretch to go from antialiasing to generating completely novel information. But it’s just numbers driving where pixels get moved to; it’s maths, there’s not really a lot of magic in these things. And given enough energy, anyone can push numbers to do things they weren’t supposed to do in the first place.

    The way folks who need their models to be on the up and up handle this is to ensure that particular averages don’t happen. Like, say we want to avoid outcome B’, but you can average A and C to arrive at B’. Then what you need is to add a negative weight to the formula. This is basically training A and C to average to something like R’ that’s really far from the point we want to avoid. But like any number, if we know the outcome is R’ for an average of A and C, we can add low-rank weights that don’t require new layers within the model. We can just say anything with R’ needs a -P’ weight; now, because of averages, we could land on C’, but we could also land on A’ or B’, the target. We don’t need to recalculate the approximation by which A and C give R’ within the model.
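    The “negative weight pushing the average away from an outcome” idea is numerically similar to how diffusion samplers apply negative prompts via classifier-free guidance. This is a rough arithmetic sketch of that blending step only; the two-element vectors and the scale value are made-up stand-ins for the model’s actual per-step noise predictions:

```python
# Sketch of guidance-style steering: blend a prediction conditioned on
# the prompt with one conditioned on a "negative" prompt, stepping toward
# the former and away from the latter. Values are purely illustrative.
import numpy as np

guidance_scale = 7.5  # how hard to follow the prompt over the negative

pred_positive = np.array([1.0, 0.2])  # prediction for the wanted outcome
pred_negative = np.array([0.1, 0.8])  # prediction for the avoided outcome

# Start from the negative branch and step toward the positive one,
# scaled by the guidance weight -- a weighted average with a negative
# coefficient on the unwanted direction.
guided = pred_negative + guidance_scale * (pred_positive - pred_negative)

print(guided)
```

    Real samplers apply this per denoising step to full noise tensors, but the arithmetic of “put a negative weight on the outcome you want to avoid” is the same.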








  • I am so sorry this got so long. I’m absolutely horrible at brevity.

    Applications use things called libraries to provide particular functions rather than implementing those functions themselves. So, take “handle HTTP request” as an example: you can just use an HTTP library to handle it for you so you can focus on developing your application.

    As time progresses, libraries change and release new versions. Most of the time one version is compatible with the next. Sometimes, especially when there is a major version change, the two versions are incompatible. If an application relied on that library and a major incompatible change was made, the application also needs to be changed for the new version of the library.
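    Under semantic versioning, that “major version change” rule can be expressed in a few lines. This helper is a hypothetical sketch of the convention, not any real package manager’s resolution logic:

```python
# Toy semantic-versioning check: only the major number promises API
# compatibility, so a newer minor/patch of the same major is fine,
# while any other major version is treated as incompatible.
def compatible(installed: str, required: str) -> bool:
    """True if `installed` satisfies `required` under semver rules."""
    inst = tuple(int(p) for p in installed.split("."))
    req = tuple(int(p) for p in required.split("."))
    return inst[0] == req[0] and inst >= req

print(compatible("2.4.1", "2.3.0"))  # True: same major, newer minor
print(compatible("3.0.0", "2.3.0"))  # False: major bump may break the API
```

    This is why a distro shipping library 2.x can keep applications built against 2.3 working with updates, but can’t silently swap in 3.0.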

    A Linux distro usually selects the version of each library that it is going to ship with its release and maintains it via updates. However, your distro provider and the developer of some neat program you use are usually two different people. So the neat program you use might have changed to be compatible with a library version that won’t make it into your distro until the next release.

    At that point you have one of two options: wait until your distro provides the updated library, or go it alone and update the library yourself (and libraries can depend on other libraries, which means you could be opening a whole Pandora’s box here). The go-it-alone route also means that you have to turn off your distro’s updates, because they’ll just overwrite everything you’ve done library-wise.

    This is where snaps, flatpaks, and AppImages come into play. In a very basic sense, they provide a means for a program to include all the libraries it’ll need to run, without those libraries conflicting with your current setup from the distro. You might hear them called “containerized programs”; they’re not exactly Docker-style “containers”, but from an isolation perspective, that’s mostly correct. So that neat application that relies on the newest libraries can be put into a snap, flatpak, or AppImage, and you can run it with those new libraries, with no need for your distro to provide them or for you to go it alone.

    I won’t bore you with the technical differences between the formats, but mostly focus on what I usually hear as the objectionable issue with snaps. Snap is a format developed by Canonical. All of these formats have a means of distribution, that is, how you get the program installed and how it’s updated. Because, you know, getting regular updates for your program is still really important. With snaps, Canonical uses a cryptographic signature to indicate that the distribution of the program has come from their “Snap Store”. And that’s the main issue folks have taken with snaps.

    So unlike the other formats, snaps are only really useful when they are acquired from the Canonical Snap Store. You can bypass the checking of the cryptographic signature via the command line, but Ubuntu will not automatically check for updates on software installed that way; you must check for updates manually. In contrast, anyone can build and maintain their own flatpak “store” or central repository. Only Canonical can distribute snaps and provide all the nice features of distribution, like automatic updates.

    So that’s the main gripe. There are technical issues as well between the formats, which I won’t get into. But the main high-level argument is the conflict between the “open and free to all” idea usually associated with the Linux community (and FOSS [free and open-source software] in general) and the “only Canonical can distribute” that comes with snaps. So as @sederx indicated, if that’s not an argument that resonates with you, the debate is pretty moot.

    There are some user-level differences too: some snaps can run a bit slower than a native program, though Canonical has updated snaps to address some of that; flatpak sandboxing can make it difficult to access files on your system, but flatpak permissions can be edited with tools like Flatseal; etc. It’s what I would file into the “papercut” box of problems. But for some, those papercuts matter and ultimately turn people off from the whole Linux thing. So there are arguments that come from that as well, but that’s so universal, just different in how the papercut happens, that I file it as a debate between containerized and native applications rather than a debate about formats.




  • “The mature and responsible thing to do would have been to add a content security policy to the page”, he wrote. “I am not mature so instead what I decided to do was render the early 2000s internet shock image Goatse with a nice message superimposed over it in place of the app if Sqword detects that it is in an iFrame.”

    I submit the Internet axiom of: there’s times and places for a measured and reasonable response, and the other times are funny af.

    Let this be a lesson to you—if you are using an iFrame to display a site that isn’t yours, even for legitimate purposes, you have no control over that content—it can change at any time. One day instead of looking into an iFrame, you might be looking at an entirely different kind of portal.

    Bravo.