
  • They have a secondary motherboard that hosts the slot CPUs: four single-core Pentium III Xeons. I also have the Dell equivalent model, but it has a bum mainboard.

    With those ’90s systems, to get Windows NT to use more than one processor, you had to have the appropriate Windows version that actually supported multiprocessing.

    Now you can simply upgrade from a 1-core to a 32-core CPU and both Windows and Linux will pick up the difference and run with it.

    In the NT 3.5 and 4 days, you actually had to either do a full reinstall or swap out several parts of the kernel to get it to work.

    Downgrading took the same effort, since a multiprocessor Windows kernel ran really badly on a single-processor system.

    As for the Sun Fires, the two models I mentioned tend to be readily available on eBay in the $100-200 range and are very different inside from an x86 system. You can go for the 400 series or higher to get even more difference, but getting a complete one of those can be a challenge.

    And yes, the software used on some of these older systems was a challenge in itself, but it isn’t really special; it’s pretty much like having different vendors’ RGB controller software on your system, a nuisance you should try to get past.

    For instance, the IBM 5000 series RAID cards were simply LSI cards with IBM-branded firmware.

    The first thing most people do is put the actual LSI firmware on them so they run decently.


  • Oh, I get it. But a baseline HP ProLiant from that era is just an x86 system, barely different from a desktop today but worse: slower and more power-hungry in every respect.

    For history and “how things changed”, go for something like a Sun Fire system from the mid 2000s (the 280R or V240 are relatively easy and cheap to get and are actually different) or a ProLiant from the mid-to-late ’90s (I have a functioning Compaq ProLiant 7000, which is HUGE and a puzzle box inside).

    x86 computers haven’t changed much at all in the past 20 years; you need to go to the rarer models (like blade systems) to see any real deviation from the basic PC-like form factor we’ve been using all that time, or any unique approaches to storage and performance.

    For self-hosting, just use something more recent that falls within your price class (usually 5-6 years old becomes highly affordable). Even a Pi is going to trounce a system that old, and it actually has a different form factor.






  • Even as far back as XP/Vista, Microsoft has wanted to run the file system as more of an adaptive database than a classical hierarchical file system.

    The leaked Vista betas had this included (the WinFS database layer), and it ran like absolute shit, mostly because hard drives are slow and RAM was at a premium, especially in Vista, which was such a bloated piece of shit.

    NTFS has since evolved to include more and more of these “smart” file system components.

    Now they want to go full on with this “smart” approach to the filesystem.

    It’ll still be slow and shit, just like it was 2 decades ago.



  • Imho you’re wrong there.

    Amazon has every incentive to book Twitch’s infrastructure costs as far higher than they need to be, to make Twitch look unprofitable.

    Both to the audience and to shareholders. It’ll allow them to force more advertising and push up sub prices while making the parent corporation’s revenue look better.

    Meanwhile, the long-term plan looks to be more about getting an excuse to shut down the public-facing side of Twitch, getting rid of having to deal with streamers and viewers as direct clients, and renting out the streaming infrastructure to other streaming sites instead.

    They want to condense their streaming services into products they can sell or rent out to other sites, rather than having to deal with a load of consumers and the legal liabilities that come with them.



  • Besides that, mRNA tech started to be developed in the 1970s, with the first lab-rat trials in the late ’80s or early ’90s.

    Clinical trials on humans, to test safety and effectiveness in combating various diseases and viruses, have been ongoing for the past decade.

    And as you said, the first several widely used vaccines based on mRNA tech have been deployed to literally billions of people.

    That’s an incredibly large sample size, and there have been very few issues over the past 3 years.

    And what bernieecclestoned brings up about herd immunity simply means the people they’re talking to are, like most antivaxxers, blithering idiots who know some catchphrases but not a single meaning behind them.

    You only obtain herd immunity with minimal casualties by hardening the herd with vaccines and then hoping the herd’s immune systems adjust to further combat the disease. If the data doesn’t show that new variants are easily countered by the herd’s immune systems, you know you need to develop more vaccines.

    If you try to obtain herd immunity by letting a brand-new disease like COVID run its course, you will probably obtain it eventually, but instead of 7 million dead worldwide (and lord knows how many with long COVID or other long-term disabilities from the disease), you’ll have 70 million or more.

    Herd immunity doesn’t mean you should just let shit hit the fan and see who’s left standing. If you miscalculate the severity of the disease, you can end up with another situation like the plague of the 500s AD, which killed over 25 million out of the roughly 180 million people on Earth.

    In today’s numbers that would mean something like 1.1 billion people dying (rough math below). Probably far more, since we’re vastly more connected than people were back then.

    And you’d think that the better general healthcare and hygiene these days would lessen it, but the sheer increase in how we’re connected would easily wipe that advantage off the board.
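
    A back-of-the-envelope version of that scaling, sketched in Python (it assumes a current world population of roughly 8 billion and reuses the 25-million-of-180-million figure quoted above, which is itself only a rough estimate):

    ```python
    # Rough scaling of the historical plague death rate to today's population.
    # All figures are the rough estimates from the comment above, not exact counts.
    plague_deaths = 25_000_000      # estimated deaths back then
    world_pop_then = 180_000_000    # estimated world population at the time
    world_pop_now = 8_000_000_000   # assumed current world population

    death_rate = plague_deaths / world_pop_then      # ~0.14, i.e. about 14% of everyone alive
    equivalent_today = death_rate * world_pop_now    # ~1.1 billion people

    print(f"Historical death rate: {death_rate:.1%}")
    print(f"Equivalent deaths today: ~{equivalent_today / 1e9:.1f} billion")
    ```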