each function has its own independent metal toggle switch
one steering wheel to steer left, and one to steer to the right
they want to push a lot of buttons on those controls
LOL
Even with a lot of buttons available, good videogame controls are simple and narrow. Natural combinations add depth without overcomplicating things.
OS stands for “Oh Shit!”
Notably, for CPU virtualization only. And on other platforms they already did.
Broadcom would like to clarify that, while using KVM for the CPU virtualization, they will continue to rely on all of the existing VMware virtual devices for graphics and other functionality. On both macOS and Windows they have already migrated to the native CPU virtualization frameworks.
I found it hard to follow despite C# being my daily driver.
Using ref has, in the past, been about modifiable variable references.
All these introductions, even when following C# changes across recent versions, were never something I actively used, apart from occasionally adding ref to structs so they can contain existing ref struct types. It never seemed necessary.
Even without ref you use reference and struct types, where referenced content can be modified elsewhere. And IDisposable for object lifetimes with cleanup.
Have you considered creating a ticket called “Can’t ask questions without joining Discord”?
Do you think it would have more answers if it were on GitHub discussions?
Release must be documented
It’s not a must [unless you put it into a contract], it’s a should or a would-be-nice.
Many, if not most, projects don’t follow good, obvious, transparent, documented release or change management.
I wish for it, too, but it’s not the reality of projects. Most people don’t seem to care about it as much as I do.
I agree blind acceptance/merging is problematic. But for some projects (small scope/size/personal-FOSS, trustworthy upstream) I see it as pragmatic rather than problematic.
The follow-up quotes:
In your specific case, the problem is your employer is on that list [of sanctioned entities]. If there’s been a mistake and your employer isn’t on the list, that’s the documentation Greg is looking for.
I would consider four approaches.
1. Commit and push manually and deliberately
I commit changes early and often anyway. I also push regularly, seeing the remote as a safe and remote (as in backup) baseline and reference state.
The question would be: do I switch PCs while still exploring things in the workspace, without committing before moving away, and want those changes on the other PC? Then this would not be enough.
2. Auto-push all local git references into a separate space on the git remote
Git branches are refs, commit pointers, just like other refs are. And they can be put under arbitrary paths: refs/heads/ holds branches. I can replicate and regularly update all my branches under refs/pcreplica/laptop/*, and then on the other PC list or fetch those, individually or all of them, regularly and automatically, or manually.
git push origin 'refs/heads/*:refs/pcreplica/laptop/*'
git ls-remote
git fetch origin 'refs/pcreplica/laptop/*:refs/laptop/*'
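This could be automated per repository; a minimal sketch, assuming a post-commit hook and a remote named origin (both are my assumptions, not part of the idea above):

#!/bin/sh
# .git/hooks/post-commit — mirror all local branches into the replica namespace.
# --force, because the replica refs should follow local state even after rebases.
git push --quiet --force origin 'refs/heads/*:refs/pcreplica/laptop/*'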
3. Auto-push the/a local branch like you suggested
My concern here would be: is only one branch enough? Is only the current branch enough?
4. Remoting into the other system
Are the systems both online? Can I remote into / connect to it when needed?
Has features ✅
we should just write the code how it should be
Notably, that’s not what he said. He didn’t say it in general. He said “for once, [after this already long discussion], let’s push back here” (literally: “this time we push back”).
who need a secure OS (all of them) will opt to not use Linux if it doesn’t plug these holes
I’m not so sure about that. He’s making a fair assessment. These are very intricate attack vectors. Security assessment is risk assessment either way. Whether you’re weighing a significant performance loss against low-risk but potentially high-impact attack vectors, or assessing the risk directly, doesn’t make that much of a difference.
These attacks are so intricate and unlikely to occur, with other firmware patches in line or alternative hardware available, that there are alternative options and an acceptable risk.
Code before:
async function createUser(user) {
    if (!validateUserInput(user)) {
        throw new Error('u105');
    }
    const rules = [/[a-z]{1,}/, /[A-Z]{1,}/, /[0-9]{1,}/, /\W{1,}/];
    if (user.password.length >= 8 && rules.every((rule) => rule.test(user.password))) {
        if (await userService.getUserByEmail(user.email)) {
            throw new Error('u212');
        }
    } else {
        throw new Error('u201');
    }
    user.password = await hashPassword(user.password);
    return userService.create(user);
}
Here’s how I would refactor it for my personal readability. I would certainly introduce class types for some concern structuring rather than dangling functions, but that would be the next step, and I’m also not too familiar with the differences between TypeScript and JavaScript.
const passwordRules = [/[a-z]{1,}/, /[A-Z]{1,}/, /[0-9]{1,}/, /\W{1,}/]

const validatePassword = (plainPassword) => plainPassword.length >= 8 && passwordRules.every((rule) => rule.test(plainPassword))

const userExists = async (email) => await userService.getUserByEmail(email)

async function createUser(user) {
    // What is validateUserInput? Why does it not validate the password?
    if (!validateUserInput(user)) throw new Error('u105')
    // Why do we check the password before the email? I would expect the other way around.
    if (!validatePassword(user.password)) throw new Error('u201')
    if (await userExists(user.email)) throw new Error('u212')

    const hashedPassword = await hashPassword(user.password)
    return userService.create({ email: user.email, hashedPassword: hashedPassword })
}
Noteworthy: the parameter is named plainPassword, which makes explicit what kind of password the function expects. (In C# I would use a param label on the call, validatePassword(plainPassword: user.password), which would make the interface expectation and the label transformation from interface to logic clear.)
Structurally, it’s not that different from the post’s suggestion. But it doesn’t do truthy value interpretation, and it goes a bit further.
Being able to build the app as you are trying to do here is an issue we plan to resolve and is merely a bug.
So it really is that simple: a small bash script, building locally, rsync’ing the changes, and restarting the service. It’s just the bare essentials of a deployment. That’s how I deploy in 10 seconds.
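For illustration, such a script might look like this (a rough sketch; the build command, host, and paths are my assumptions, not from the post):

#!/usr/bin/env bash
set -euo pipefail

npm run build                                            # assumption: a Node project
rsync -az --delete ./dist/ deploy@example.com:/srv/app/  # sync only the changes
ssh deploy@example.com 'systemctl restart app.service'   # assumes restart privileges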
I’m strongly opposed to local builds on any semi-important or semi-complex production product or system.
Tagged CI release builds give you a lot of important guarantees for release concerns.
I’ll take the fresh checkout and release build time cost for those consistency and versioned source state guarantees.
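What that could look like, as a sketch: a tag-triggered CI job building from a fresh checkout of the exact tagged source state (repository URL, tag variable, and build commands are placeholders, not a specific CI product):

#!/usr/bin/env bash
set -euo pipefail

# Fresh checkout of exactly the tagged source state — no local workspace leftovers.
git clone --depth 1 --branch "$RELEASE_TAG" https://example.com/repo.git build
cd build
npm ci            # reproducible dependency install from the lockfile
npm run build     # assumption: same build command as the local script
npm pack          # produce a versioned artifact to deploy, instead of a local dist/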
learned from 10 years/millions of users in production
10 years per millions of users is an interesting metric :P
I wasn’t aware the GitHub terms of service explicitly grant / require you to grant permission to fork [within GitHub].
GitHub ToS, section “License Grant to Other Users”:
By setting your repositories to be viewed publicly, you agree to allow others to view and “fork” your repositories (this means that others may make their own copies of Content from your repositories in repositories they control).
If you set your pages and repositories to be viewed publicly, you grant each User of GitHub a nonexclusive, worldwide license to use, display, and perform Your Content through the GitHub Service and to reproduce Your Content solely on GitHub as permitted through GitHub’s functionality (for example, through forking). […] If you are uploading Content you did not create or own, you are responsible for ensuring that the Content you upload is licensed under terms that grant these permissions to other GitHub Users.
Yeah, I thought the same. Pretty bad name.
Maybe all bunnies are actually snails with a fur coat on.
looks like a multi-threading or concurrency issue