Lrrreconcilable Ndndifferences
Fry wrote the comic where he saves Leela near the intro, foreshadowing him saving Leela later in the episode.
I am a Meat-Popsicle
Yeah, a company got toasted because one of their admins was running Plex and had Tautulli installed and open to the outside, figuring it was read-only and safe.
A zero-day bug in that exposed his Plex token. They then used another vulnerability in Plex to get remote code execution. He was self-hosting a GitHub copy of all the company’s code.
Home assistant Web app would be fine.
We’re a long way from trusting it to do something critical without intervention.
AI would be good at looking at an X-ray after a doctor and pointing out anomalies. But it would be bad to have it tell the doctor that everything looks fine.
Yeah, you still need the CPU to move all the data to the video card and to and from memory. The stuff I play doesn’t mind 30 frames per second, and I’m not really much of a stickler for high settings. But even the shitty Unity games are starting to struggle.
We have high standards for American Chinese food. There was this place where we used to live, and the food was great. Not everything they made came out of a bag, and even the things that did come out of a bag had absolutely superior sauces. I don’t know exactly what they did, but whatever it was, it was head and shoulders above anything else around here.
We ordered our regular dishes one day. A few hours later we were exploding out of both ends. Was it them? Was it lunch? Who knows? We went about our regular business, and two weeks later ordered the same regimen. A few hours later we were again exploding out of both ends.
The puking wasn’t all that bad but the raw acid diarrhea and the massive cramps were just insane.
This was a pretty bad scenario because at the time we lived in a house with one bathroom.
We never ordered from there again. They had this really great iced tea, though. It took me ages to figure out how to replicate it. It ended up being about 14 to 1 regular sweetened black tea to Earl Grey, plus a splash of lemon.
China certainly could be lying.
Half of the US states are purposely bankrupting their education systems to make sure the 1 percenters are the only ones with any advantage. Even in the states that aren’t actively trying to stamp out education, the poor and middle class can’t afford a respectable education.
China is sitting on a pile of natural resources and doesn’t have any problems with underpaying and working people to death.
They’re set up to do a lot with very little: they have a lot of people and resources, and they’re not afraid to educate enough people to get the job done.
It’s not just space; they’re getting places with electric cars that we can’t touch.
It’ll be interesting to see where all this ends up.
He wants in on the new authoritarian regime. Slowing down or stopping electric cars is on their to-do list.
I keep a root folder. On Windows it’s in C:\something; on Linux it’s in /something.
Under there I’ve got projects organized by language. This helps me organize nix shells and venvs.
Syncthing keeps the code bases synced between multiple computers.
I don’t separate work from home because they don’t live in the same realm.
Only home stuff in the syncthing.
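The per-language layout above can be sketched as a tiny script. Everything here is illustrative: the root name, the languages, and the project names are made up, not the actual folders.

```python
import tempfile
from pathlib import Path

# Hypothetical root; in practice this would be C:\something or /something.
ROOT = Path(tempfile.mkdtemp()) / "code"

# Projects grouped by language, so one shell.nix or shared venv can sit
# next to all the projects that use it. Names are made up for illustration.
projects = {
    "python": ["hoa-rag", "scraper"],
    "rust": ["cli-tool"],
    "nix": ["shells"],
}

for language, names in projects.items():
    for name in names:
        (ROOT / language / name).mkdir(parents=True, exist_ok=True)

# Show the resulting language/project tree
print(sorted(p.relative_to(ROOT).as_posix() for p in ROOT.glob("*/*")))
```

Grouping by language (rather than by work/home or by year) is what makes the nix shells and venvs easy to share, since the environment tooling usually varies per language, not per project.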
It tells me which document in the collection it used, but it doesn’t give me much in the way of context or the exact location within the document. It will usually give me some wording, and if I’m missing it I can go to the document and search for that wording.
I’m just one person searching a handful of documents, so the sample size is pretty small for repeatability. So far, if it says it’s in there, it’s in there. It definitely misses things though; I’m still early in the process. I need to try some different models and perhaps clean up the data a little for some of the stuff.
Using the documentation as source data, it doesn’t seem to hallucinate or insist things are wrong; it’s more likely to say it doesn’t see any information about that when the data is clearly in the data set somewhere.
YW on the responses. I’m having fun with it, even if it’s taking forever to get it dialed in and truly useful.
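The "which document did it use" behavior can be sketched as a toy retriever: score each document by term overlap with the query, then return the best match plus a snippet you can go search for, as described above. The document names and text here are invented for illustration; a real RAG pipeline uses embeddings, not word overlap.

```python
def best_source(query, docs):
    """Return (doc_name, snippet) for the doc that best matches the query."""
    terms = set(query.lower().split())

    def score(item):
        name, text = item
        # naive relevance: how many query words appear in the document
        return len(terms & set(text.lower().split()))

    name, text = max(docs.items(), key=score)
    # hand back a short snippet the reader can grep for in the source doc
    snippet = " ".join(text.split()[:8])
    return name, snippet


# Made-up stand-ins for the HOA document collection
docs = {
    "bylaws.pdf": "Architectural requests are reviewed by the committee within thirty days",
    "budget.pdf": "Annual dues are set by the board each fiscal year",
}

name, snippet = best_source("who reviews architectural requests", docs)
print(name, "->", snippet)
```

The weakness is the same one noted above: if the query and the document don’t share wording (no "English fluff" in terse technical docs), overlap-style matching has nothing to grab onto.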
Trident VGA?
I got a 3dfx Voodoo as soon as they came out. GLQuake was mind-blowing.
I bought a Riva TNT
Then a GeForce 2
Then a Radeon 9000
Then for a bunch of years I just moved into laptop after laptop with discrete GPUs.
Now I still have a 1080 and a 2070 doing a little bit of light AI work and video transcoding for me. But I’m still relying on crappy laptop GPUs for all my gaming. They’re good enough.
I have two projects for it right now. The first is shoving my labyrinth of HOA documents into it so I can answer quick questions about the HOA docs or at least find the right answer more effectively.
The second is for work. I shoved in a couple months of Slack, some Google Docs, and some PDFs, all about our production product. Next I’m going to start shoving some of our GitHub in there. It would be kind of nice to have something I could ask "where is the shorting algorithm and how does it work" and have it give me back where the source code is and any documentation related to it.
The HOA docs I could feed into GPT; I’m still a little apprehensive about handing over all of our production code to a public AI, though.
I’ve got it running on a 2070 Super, and I’ve got another instance running on a fairly new Arc. It’s not fast, but it’s also not miserable. I’m running the medium-sized models; I only have so much VRAM to work with. It’s kind of like trying to read the output off a dot matrix printer.
The natural language aspect is better than trying to shove it into a conventional search engine, say when I don’t know what a particular function is called, or which subcompany my HOA uses to review architectural requests. Especially for the work stuff, where there are so many different types of documents lying around. I still need to try some different models, though; my current model is a little dumb about context. I’m also having a little trouble with technical documentation that doesn’t have a lot of English fluff. It’s like I need it to digest a dictionary to go along with the documents.
SearX is fancy about it, though: it queries everybody and gives you the results that came back from multiple places. This effectively eliminates ads, AI, and, unless they all missed it, spam.
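The "results that came back from multiple places" idea can be sketched as a vote count: each engine votes once per URL, and URLs that several engines agree on rank first. The engine names and URLs below are made up, and a real metasearch instance does far more (scoring, dedup, per-engine weights); this is just the agreement heuristic.

```python
from collections import Counter

def merge(results_by_engine):
    """Rank URLs by how many engines returned them (ties alphabetical)."""
    votes = Counter()
    for urls in results_by_engine.values():
        votes.update(set(urls))  # each engine votes at most once per URL
    return sorted(votes, key=lambda url: (-votes[url], url))

# Made-up result lists from three upstream engines
results = {
    "duckduckgo": ["a.example", "spam.example", "b.example"],
    "bing": ["a.example", "b.example"],
    "mojeek": ["a.example", "c.example"],
}

ranked = merge(results)
print(ranked)  # a.example first: all three engines returned it
```

Ads and spam tend to be engine-specific, so they collect one vote and sink to the bottom, which is exactly the filtering effect described above.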
Using DuckDuckGo works pretty well for me; if I go to bing.com, my results are horrible. Of course it’s the same result set, but I expect I’m getting less algorithmic shuffling on DuckDuckGo.
I got Ollama and WebUI working privately/locally, and I’m able to insert documents into it with persistence and query them.
Oh, give us a couple of decades to screw up the environment enough and we won’t be able to grow outside.
Unless he thinks he’s going to serve all that from the die in the next 5 years.
Running Ubuntu on my 2015 Air, I struggle to get 2 hours out of it. I was able to get TLP to bring it close to 4, but it was at the cost of being borderline unusable.
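The battery-versus-usability trade-off usually comes from battery-mode settings along these lines in /etc/tlp.conf. The option names are real TLP settings, but the values here are illustrative guesses; capping the max CPU performance is typically what makes the machine feel borderline unusable.

```ini
# Illustrative TLP battery settings, not a recommendation
CPU_SCALING_GOVERNOR_ON_BAT=powersave
CPU_BOOST_ON_BAT=0          # disable turbo on battery
CPU_MIN_PERF_ON_BAT=0
CPU_MAX_PERF_ON_BAT=50      # cap CPU at half performance: big savings, big lag
WIFI_PWR_ON_BAT=on          # Wi-Fi power saving
RUNTIME_PM_ON_BAT=auto      # runtime power management for PCIe devices
USB_AUTOSUSPEND=1
```

Backing off `CPU_MAX_PERF_ON_BAT` (or leaving boost enabled) trades some of that extra runtime back for responsiveness.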
I just wish they’d take a strong stance on blocking propaganda in the US too.
I wonder how many services would have to pull out of Russia before Russians started pushing back on their government.
Probably preferential licensing. Black Mirror is still in active development with them.