• 0 Posts
  • 10 Comments
Joined 1 year ago
Cake day: July 2nd, 2023

  • There was a ton of harebrained theories floating around, but nobody had any definitive explanation.

    Well I was new to the company and fresh out of college, so I was tasked with figuring this one out.

    This checks out lol

    Knowing very little about USB audio processing, but having cut my teeth in college on 8-bit 8051 processors, I knew what kind of functions tended to be slow.

    I often wonder if this deep-level understanding of embedded software/firmware design is still the norm in university instruction. My suspicion is that the focus has shifted to making use of ever-increasing SoC performance and capabilities, in pursuit of making it Just Work™, proving Wirth’s Law in the process via badly optimized code.

    This was an excellent read, btw.





  • 1 - I get that light is flashed in binary to code chips but how does it actually fookin work ? What is the machine emmiting [sic] this light made up of ?

    This video by Branch Education (on YouTube or Nebula) is a high-level explanation of every step in a semiconductor fab. It doesn’t go over the details of how semiconductor junctions work, though. That sort of device physics is discussed in this YouTube video by Ben Eater, “how semiconductors work”.

    2 - How was program’s, OSs, Kernal [sic] etc loaded on CPU in early days when there were no additional computers to feed it those like today ?

    When the CPU powers up, typically the very first thing it executes is the bootloader. Bootloaders vary depending on the system, and today’s modern Intel or AMD desktop machines boot very differently to their 1980s predecessors. However, since the IBM PC laid the foundation for how most computers booted up for nearly four decades, it may be instructive to see how it worked in the 80s. This WikiBook on x86 bootloading should be valid for all 32-bit x86 targets, from the original 8086 to the i686. It may even be valid beyond that, but then UEFI started to take off, which changed everything into a more modern form.
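
    As a rough sketch of the legacy BIOS convention described in that WikiBook (the 512-byte sector, the 0x7C00 load address, and the 0x55 0xAA signature are the classic MBR conventions; the disk image file name here is just a hypothetical example), the “am I bootable?” check looks something like this in Python:

    ```python
    # Minimal sketch of how a legacy BIOS decides a disk is bootable:
    # read the first 512-byte sector and check that its last two bytes are
    # the 0x55 0xAA signature. On real hardware, the BIOS would then copy
    # the sector to address 0x7C00 and jump to it.

    def read_boot_sector(path: str) -> bytes:
        with open(path, "rb") as disk:
            sector = disk.read(512)  # the very first sector of the disk image
        if len(sector) != 512:
            raise ValueError("image too small to contain a boot sector")
        return sector

    def is_bootable(sector: bytes) -> bool:
        return sector[510] == 0x55 and sector[511] == 0xAA

    if __name__ == "__main__":
        sector = read_boot_sector("disk.img")  # "disk.img" is a hypothetical image
        print("bootable" if is_bootable(sector) else "no boot signature")
    ```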

    But even before the 80s, computers could have a program/kernel/whatever loaded using magnetic tape, punch cards, or even by hand with physical switches, each representing one bit.
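
    To make the “physical switches” idea concrete, here is a toy sketch (my own illustration, not modelled on any particular machine’s front panel) of how a row of eight toggle switches becomes one byte of memory:

    ```python
    # Toy model of entering a program through front-panel switches:
    # each switch is one bit, and a full row of switches is one memory word.

    def switches_to_byte(switches: list[int]) -> int:
        """switches[0] is the most significant bit, e.g. [1,0,1,0,1,0,1,0] -> 0xAA."""
        value = 0
        for bit in switches:
            value = (value << 1) | (bit & 1)
        return value

    memory = []
    # "Deposit" two words by setting the switches and pressing a deposit button.
    memory.append(switches_to_byte([1, 0, 1, 0, 1, 0, 0, 1]))  # 0xA9
    memory.append(switches_to_byte([0, 1, 0, 0, 0, 0, 1, 0]))  # 0x42
    print([hex(b) for b in memory])  # ['0xa9', '0x42']
    ```

    (As it happens, 0xA9 0x42 is a real 6502 instruction, LDA #$42, which leads nicely into the next part.)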

    But how does the computer decode this binary “machine code” into instructions to perform? See this video by Ben Eater, explaining machine instructions for the MOS 6502 CPU (circa 1975). The age of the CPU isn’t important; what matters is that by the 70s the basics of CPU operation had already been laid down, and that particular CPU is easy to explain yet non-trivial.
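
    To give a flavour of that fetch-decode-execute loop, here is a toy interpreter for a handful of real 6502 opcodes (a sketch only: flags, carry, decimal mode, and the rest of the instruction set are ignored):

    ```python
    # Toy fetch-decode-execute loop for a few real 6502 opcodes:
    # 0xA9 = LDA #imm, 0x69 = ADC #imm, 0x8D = STA absolute, 0x00 = BRK.

    program = bytes([0xA9, 0x40,        # LDA #$40   ; load 0x40 into the accumulator
                     0x69, 0x02,        # ADC #$02   ; add 2
                     0x8D, 0x00, 0x02,  # STA $0200  ; store the accumulator at 0x0200
                     0x00])             # BRK        ; treated as "halt" here

    memory = bytearray(64 * 1024)       # 64 KiB address space
    memory[:len(program)] = program     # program loaded at address 0 for simplicity
    a = 0                               # accumulator
    pc = 0                              # program counter

    while True:
        opcode = memory[pc]; pc += 1                    # fetch
        if opcode == 0xA9:                              # LDA immediate
            a = memory[pc]; pc += 1
        elif opcode == 0x69:                            # ADC immediate (carry ignored)
            a = (a + memory[pc]) & 0xFF; pc += 1
        elif opcode == 0x8D:                            # STA absolute (little-endian address)
            addr = memory[pc] | (memory[pc + 1] << 8); pc += 2
            memory[addr] = a
        elif opcode == 0x00:                            # BRK
            break
        else:
            raise ValueError(f"unhandled opcode {opcode:#04x}")

    print(hex(memory[0x0200]))  # 0x42
    ```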

    3 - I get internet is light storing information but how ? Fookin HOW ?

    The mechanics of light bouncing inside a fibre optic cable are well explained in this YouTube video by engineerguy. But an explanation of how ones-and-zeros get converted into light to be transmitted is a bit more involved; I might just point you to the Wikipedia page for fibre optic communications.

    How the data is encoded is important, as this has a significant impact on bandwidth and data integrity, not just for light but also for wireless RF and wireline transmission. For wireless, this Branch Education video on Starlink (YouTube or Nebula) is instructive. And for wired, this Computerphile YouTube video on ADSL covers the challenges faced.
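
    To give a flavour of what “how the data is encoded” means, here is a toy comparison of plain NRZ against Manchester encoding (using the IEEE 802.3 convention, where a 1 is a low-to-high transition); over fibre, each output symbol would simply be the laser switched on or off:

    ```python
    # Toy line-coding comparison. Each bit becomes one or two "symbols":
    # think laser on/off periods on a fibre, or voltage levels on a wire.

    def nrz(bits):
        """Non-return-to-zero: one symbol per bit, level = bit value.
        Simple, but a long run of identical bits gives the receiver no
        transitions from which to recover the sender's clock."""
        return list(bits)

    def manchester(bits):
        """Manchester (IEEE 802.3 convention): a 1 is a low-to-high transition,
        a 0 is a high-to-low transition. It halves the usable bandwidth but
        guarantees a transition in every bit period for clock recovery."""
        out = []
        for b in bits:
            out += [0, 1] if b else [1, 0]
        return out

    bits = [1, 0, 1, 1, 0, 0, 0, 1]
    print("NRZ:       ", nrz(bits))
    print("Manchester:", manchester(bits))
    ```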

    Quite frankly, I might just recommend the entirety of the Computerphile channel, particularly their back catalogue, where they laid down computer fundamentals.

    4 - How did it all come to be like it is today and is it possible for one human to even learn how it all works or are we just limited to one or two things ? Like can we only know how to program or how to make hardware but not both or all ?

    As of 2024, the field is enormous, to the point that a CompSci degree necessarily has to be focused on a specific concentration. But that doesn’t mean the hard stuff like device physics is off-limits, leaving just software and AI. Sam Zeloof has been making homemade microchips, devising his own semiconductor process and posting it on YouTube.

    Specifically to your question about software versus hardware: the specialty of embedded software engineering requires skills with low-level software or firmware, as well as dealing with substantial hardware-specific details. People who write drivers or libraries for new hardware need skills from both regimes, acting as the bridge between the electrical engineers who design the hardware and the software developers who use it.

    Likewise, developers for high-performance computing need to know the hardware inside-out to have any chance of extracting every last bit (pun intended) of speed. However, these developers tend to rely upon documentation such as datasheets, rather than having to be keenly aware of how the hardware was manufactured. Some level of logical abstraction is necessary to tractably understand today’s large and complex systems.

    5 - Do we have to join Intel first or something to learn how most of the things work lol ?

    Nope! Often, you can look to existing references, such as the Linux source code, to get a peek at what complexities exist in today’s machines. I say that, but the Linux kernel is truly a monster, not because it’s badly written, but because they willingly take code to support every single bleeding platform that people are willing to author code for. And that means lots and lots of edge cases; there’s no such thing as a “standard” computer. x86 might be the closest to a “standard”, but Intel has never quite been consistent across that architecture’s existence. And ARM and RISC-V are on the rise, in any case.

    Perhaps what’s most important is to develop strong foundations to build on. Have a cursory understanding of computing, networking, storage, wireless, software licenses, encryption, video encoding/decoding, UI/UX, graphics, services, containers, data and statistical analysis, and data exchange formats. But then pick one and focus on it, seeing how it interacts with other parts of the computing world.

    Growing up, I had an interest in IT and computer maintenance. Then it evolved into writing websites. Then into writing C++ software. Right before university, I started playing around with the Arduino’s Atmel ATmega328P microcontroller directly, and so I entered uni as a Computer Engineering major, hoping to do both software and hardware.

    The space is huge, so start somewhere that interests you. From the examples above, I think online videos are a fantastic resource, but so are blog posts written by engineers at major companies, as are talks at conferences and even sitting in on university courses.

    Good luck and good studies!



  • I think this can be generalized as: why do some people eschew anonymity online? A few plausible reasons come to mind:

    • a convention carried over from the pre-Internet days to be honest and frank as one would be in-person
    • having no prior experience with anonymity or a basis to expect anonymity to last
    • they’re already a real-life edgelord and so the in-person/online distinction is artificial, or have an IDGAF attitude to such distinctions

    IMO, older people tend to have the first reason, having come to the Internet as just another communication tool. Younger, post-2000 people might have the second reason, because within their lifetime, privacy has eroded to the point that it’s almost mythical. Or they see anonymity as being like the landed gentry: you have to be highly privileged to be able to maintain it.

    I have no thoughts as to the prevalence of the third reason, but I’m reminded of a post I saw on Mastodon months ago, which went something like this: every village used to have the village idiot, but he was mostly benign because everyone in town knew he was an idiot. One moron in every 5 or 10 thousand people is fine. But with the Internet, all the village idiots can network with each other, expanding their personal communities and hyping themselves up to do things they otherwise wouldn’t have found support for.

    Coming back to the question, in the context above, maybe online anonymity is a learned practice, meaning it has to be taught and isn’t plainly natural. Nothing quite like the Internet has ever existed in human history, so what’s “natural” may just not have caught up yet. The fact that internet literacy and safety are topics requiring instruction bolsters this thought.



  • litchralee@sh.itjust.works to Programming@programming.dev · Redis is no longer OSS · 3 months ago

    There are two concepts at play here: open-source and free software. An early example of open-source is AT&T Research UNIX, which was made source-available (for a fee) to universities for research purposes; they could recompile the code and use the binaries for that purpose. Here, the use of the software is restricted by the license terms.

    On the free software side, as a reimplementation of the Unix software utilities – i.e. all the programs like tar, ps, sh – GNU coreutils is GPL-licensed, meaning any use of the compiled binaries is allowed, but there are restrictions on distribution, of both source and binaries. As it turns out, the GPL is both free and open-source (FOSS); there are fewer major examples of free but non-open-source software, but WinRAR and the nVidia drivers on Linux would count.

    Specifically, the GPL and other copyleft licenses require that if you distribute the binary, you must make the source available under the same terms. If you’ve made no changes, this is as simple as linking to the public source code repo. If you did add or remove code, you must release those changes alongside the binaries. If you simply use the binaries internally, you don’t need to release anything at all, and can still use them for any internal purpose.

    wouldn’t GPL and other copyleft licenses be considered non-free as well since you are not free to do whatever you want with the source

    From the background above, free software has always been understood to mean the freedom to use the software, not necessarily to distribute it. The GPL complies with that definition for using the software, but also enforces a self-perpetuating distribution requirement. Unlike plain ol’ free software, under the GPL you must redistribute the source if you distribute the software for use (aka the binaries), and you must make that source GPL as well.


  • litchralee@sh.itjust.works to Programming@programming.dev · Redis is no longer OSS · 3 months ago

    Irrespective of debates on what the definition of “open source software” is or who gets to define it, it is very clear that the SSPL is not a FOSS – free and open source software – license, and that’s a shame. Sure, open source in the loose sense still means we can look at the source code, but we do not have the full freedom to use the code for any purpose. You might retort, “but I’m not an aaS provider, so my rights aren’t affected.”

    But that’s the thing: the erosion of free software rights is never the end, but the beginning of the end. Much like free speech, such rights must be jealously guarded. Need I mention what happens when there’s no one left to speak up?

    That some users of Redis never contributed back to the project is beside the point: truly free software is free as in libre. If you want thanks for your work, release it as freemium or under some other license. But a FOSS license like BSD-3-Clause has always been thankless, and the OSI is correct in calling out the SSPL for meeting neither the anti-discrimination clause of the OSI’s Open Source Definition nor the zeroth of the FSF’s four freedoms.

    Free means free. AGPL is free. But SSPL carves out an exception, making it not free. No amount of sweet talking changes this reality.