This article covers google linux. If you want to learn about google linux, join TravelingSpaceMuseum in unpacking the topic in this post, Google thinks Linux is slow to reboot, so they patched it.

An overview of the material on google linux covered in Google thinks Linux is slow to reboot, so they patched it

Watch the video below.


At travelingspacemuseum.org you can find plenty of content beyond google linux. At Traveling Space Museum we publish new, accurate information every day, aiming to bring you the most useful and detailed news online.

Some notes on the topic of google linux

The Google Linux boxes in question have more than 16 NVMe PCI Express SSDs. When a shutdown signal is sent to Linux, the operating system loops over each NVMe drive and issues a synchronous shutdown request that takes about 4.5 seconds, which adds more than a minute to a reboot. Google patched Linux with an asynchronous shutdown API.

Resources:
- The patch
- Sync vs async
- Fundamentals of Database Engineering udemy course (link redirects to udemy with a coupon)
- Introduction to NGINX (link redirects to udemy with a coupon)
- Python on the Backend (link redirects to udemy with a coupon)
- Become a member on YouTube
- 🔥 Members-only content
- 🏭 Backend engineering videos, in order
- 💾 Database engineering videos
- 🎙️ Backend engineering gear and tools used on the channel (affiliate)
- 🖼️ Slide and thumbnail design with Canva

Stay Awesome, Hussein.
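
To make the difference concrete, here is a minimal user-space sketch in C (not the actual kernel patch) that simulates the two strategies described above: shutting drives down one at a time versus issuing every shutdown first and only then waiting. The drive count and per-drive delay are placeholders taken loosely from the video; compile with -pthread.

```c
/* Hypothetical user-space simulation, not the real NVMe shutdown path. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define NUM_DRIVES        16
#define SHUTDOWN_SECONDS  4   /* roughly the per-drive latency quoted in the video */

static void *shutdown_one_drive(void *arg)
{
    int id = *(int *)arg;
    sleep(SHUTDOWN_SECONDS);          /* stands in for the NVMe shutdown handshake */
    printf("drive %d down\n", id);
    return NULL;
}

int main(void)
{
    int ids[NUM_DRIVES];
    pthread_t threads[NUM_DRIVES];

    /* Synchronous style: shut down and wait for one drive at a time,
     * so the total is roughly NUM_DRIVES * SHUTDOWN_SECONDS.
     * for (int i = 0; i < NUM_DRIVES; i++) { ids[i] = i; shutdown_one_drive(&ids[i]); }
     */

    /* Asynchronous style: issue every shutdown first, then wait once,
     * so the total is roughly the slowest single drive. */
    for (int i = 0; i < NUM_DRIVES; i++) {
        ids[i] = i;
        pthread_create(&threads[i], NULL, shutdown_one_drive, &ids[i]);
    }
    for (int i = 0; i < NUM_DRIVES; i++)
        pthread_join(threads[i], NULL);

    return 0;
}
```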


Some images related to the topic of google linux

Google thinks Linux is slow to reboot, so they patched it

Besides this article on Google thinks Linux is slow to reboot, so they patched it, you can find more content below.


See more here

Keywords related to google linux

#Google #thinks #Linux #slow #reboot #patched.

hussein nasser,backend engineering,linux,google,nvme.

Google thinks Linux is slow to reboot, so they patched it.

google linux.

We hope this content is useful to you. Thank you very much for following our google linux news.

37 thoughts on “Google thinks Linux is slow to reboot, so they patched it | All information about google linux, freshly updated”

  1. Luiz says:

    2:42 Linux is too synchronous… why don't they get rid of Unix signals once and for all? They don't actually even work that well. Ironically, Windows got it right with their amazing IOCP implementation; the I/O on Windows is 100% async all the time, and there's not a single sync API at the kernel level. When the user thread waits for I/O, the thread is paused if it chooses to be sync; otherwise it can poll with select, or just use WaitForObjects and stall (it goes to sleep and blocks) until the OS receives the async completion signal and wakes up the thread. But there's never a kernel-side thread that's waiting for a signal; the IOCP dispatcher always runs and does DPC (deferred procedure call) dispatch.
    Ironically, people criticized this design because it had latency if the CPU were slower than the I/O device, which is never the case for modern hardware…

    You could potentially have libuv doing everything on Linux, but there's already plenty of contention over the fact that systemd uses D-Bus for messaging. They absolutely hate asynchronous code for some reason… Locking "threads" and waiting for nothing is so fun… (my mistake, Linux doesn't schedule threads, only processes)

  2. Luiz says:

    1:00 That's not enough. Why optimize a software stack? Why have a stack at all? If I needed that much performance I would only use unikernels built specifically for a single-purpose piece of software running on a core. There would be no time-sharing multi-user operating system; this is ludicrous.

  3. GOD KEK LIVES HERE says:

    This must be a joke, right?
    I use Ubuntu 21.10 as a dual boot,
    and it boots up much faster than freaking Windows 10.
    And by the way, I have a ThinkPad X220 with 16 GB of RAM and an Intel Core i5 that's unsupported on Windows 10….
    Let this sink in.
    Ubuntu not only has all of my drivers for my graphics, it also boots up in less than a minute.

  4. Pașca Alexandru says:

    But that is up to the init, not the kernel, to handle. Either sysv or systemd or a custom routine can have sequential and parallel services. It's just a matter of how the service is written.
    Some things the entire system needs to wait on, some it doesn't.

    BTW, if it's a reboot and not a power-cycle issue, why even bother? The drives flush themselves at pcie_init, and the caches never lose power.

  5. True River says:

    One way to code it would be to start one thread for each device: each thread issues a sync and waits. Then wait for all the threads. That might still cause multiple delays if you have more devices than available threads (!). It would work internally by the thread handler waiting for some kind of interrupt or semaphore from each thread — that may not be safe in a shutdown situation.

    Another way is to loop over all devices issuing the sync without waiting. Then loop over all devices issuing a sync-and-shutdown, but this time wait for confirmation from each device.

    For some of the devices, by the time the second loop gets to them they are fully synced, so they return almost instantly. That way the total time is only slightly more than the longest time taken by any device.

    That is going to be safer than any interrupt-based method and almost as fast.
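
A minimal sketch of the second, two-pass approach described in the comment above. The helpers device_start_sync() and device_wait_sync_and_shutdown() are hypothetical stand-ins for whatever the real driver interface would be; here they only print messages.

```c
/* Two-pass shutdown sketch: fire all syncs first, then collect confirmations. */
#include <stdio.h>

#define NUM_DEVICES 16

/* Pass 1: kick off a cache flush on every device without waiting. */
static void device_start_sync(int id) { printf("start sync on device %d\n", id); }

/* Pass 2: ask the device to finish syncing and shut down, blocking until it
 * confirms; devices that already finished in the background return quickly. */
static void device_wait_sync_and_shutdown(int id) { printf("device %d confirmed shutdown\n", id); }

int main(void)
{
    for (int i = 0; i < NUM_DEVICES; i++)
        device_start_sync(i);

    for (int i = 0; i < NUM_DEVICES; i++)
        device_wait_sync_and_shutdown(i);

    return 0;
}
```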

  6. Igor Dasunddas says:

    In general I'd imagine that you send all the flush/shutdown requests and keep an open token for, say, a maximum meaningful number of seconds, and either just log or even retry, etc. Then you just wait for all of them to either complete or hit their timeout and retry count, log the outcome, and move on in the queue of shutdown tasks.

    I have never seen a server with thousands of NVMe SSDs, but I'd think that processing the queue would be something the processor does anyway – so the performance should be there, as there probably won't be much load on the CPU at this point anyway, right?

    I am a software engineer, but I haven't ever dealt with the Linux kernel. I just imagined how I'd – theoretically – handle the asynchronous flush/shutdown requests.
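
A rough user-space sketch of the fan-out-with-deadline idea in this comment, assuming one worker per device plus a condition variable with a timeout; the flush is simulated with sleep() and every name and number here is made up for illustration.

```c
/* Issue every flush, then wait for all of them or for a deadline, then log. */
#include <pthread.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define NUM_DEVICES   8
#define TIMEOUT_SECS  10

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  done_cond = PTHREAD_COND_INITIALIZER;
static int done_count = 0;

static void *flush_device(void *arg)
{
    int id = *(int *)arg;
    sleep(1 + id % 3);                 /* simulated, variable flush latency */

    pthread_mutex_lock(&lock);
    done_count++;
    pthread_cond_signal(&done_cond);
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_DEVICES];
    int ids[NUM_DEVICES];

    for (int i = 0; i < NUM_DEVICES; i++) {
        ids[i] = i;
        pthread_create(&threads[i], NULL, flush_device, &ids[i]);
    }

    /* Wait until every device reports completion or the deadline passes. */
    struct timespec deadline;
    clock_gettime(CLOCK_REALTIME, &deadline);
    deadline.tv_sec += TIMEOUT_SECS;

    pthread_mutex_lock(&lock);
    while (done_count < NUM_DEVICES) {
        if (pthread_cond_timedwait(&done_cond, &lock, &deadline) != 0)
            break;                     /* timed out: stop waiting */
    }
    printf("%d of %d devices flushed before the deadline\n",
           done_count, NUM_DEVICES);
    pthread_mutex_unlock(&lock);

    /* A real shutdown path would log or retry the stragglers here,
     * then move on to the next group of shutdown tasks. */
    return 0;
}
```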

  7. Top1 says:

    Sounds like "an obvious bug" to me. Like, obviously these signals should be sent as events or whatever the kernel is using, instead of being blocking operations. I can only assume this hasn't been a major problem until now, but this kind of thing would surely have been fixed long ago if it was something that happened to more users.

  8. Hououin Kyouma says:

    Maybe the limit on async calls depends on the processing capability of the box. I remember one time I wrote a multi-threaded app, I had to limit the parallel processes to the number of vCPUs in the host because it was taking too much memory otherwise. Not sure how much of this is applicable to Linux reboots, so I could be talking out of my ass too 🙂

  9. Travis Collier says:

    Fun factoid: NAND flash is very slow (relatively speaking) at deleting. That leads to some interesting and maybe not so intuitive ways the caching gets used.

    I was working on a system which had to write out data at a pretty high rate, and pre-clearing was essential to get it to work. A side effect was that startup took ages.

  10. Зарисовки по АйТи says:

    Wondering if they could have synced the data in user space (which is easy to do in parallel) and then have the kernel shut down fast since all the data is written out… maybe it's a hack and fixing it once and for all at the kernel level is better.. just feels harder to debug if things don't go right :/
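
For what it's worth, a minimal sketch of the user-space idea above, assuming you simply call syncfs(2) on each filesystem from its own thread before asking the kernel to shut down; the mount points listed are hypothetical examples.

```c
/* Sync each filesystem in parallel from user space so the final kernel-side
 * flush has little left to do. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static const char *mounts[] = { "/", "/data1", "/data2" };  /* example mount points */
#define NUM_MOUNTS (sizeof(mounts) / sizeof(mounts[0]))

static void *sync_mount(void *arg)
{
    const char *path = arg;
    int fd = open(path, O_RDONLY | O_DIRECTORY);
    if (fd < 0) {
        perror(path);
        return NULL;
    }
    syncfs(fd);                 /* flush only this filesystem's dirty data */
    close(fd);
    printf("synced %s\n", path);
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_MOUNTS];

    for (size_t i = 0; i < NUM_MOUNTS; i++)
        pthread_create(&threads[i], NULL, sync_mount, (void *)mounts[i]);
    for (size_t i = 0; i < NUM_MOUNTS; i++)
        pthread_join(threads[i], NULL);

    return 0;
}
```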

  11. Level Up says:

    Maybe just have a limit on how many asynchronous requests can be pending, so like set it to 32 or 64, which is still a big step up from 1 at a time.

    I've got no clue how much the Linux kernel can actually handle at a time; maybe it can easily handle 1000 shutdown requests and this wouldn't be a problem at all. Though I'm sure people smarter than us will figure it out rather quickly, this isn't a very difficult problem to solve.
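
A small sketch of the "cap the number of pending requests" idea from this comment, using a counting semaphore to keep at most 32 simulated shutdown requests in flight at once; the drive count and delay are arbitrary placeholders.

```c
/* Bounded-concurrency shutdown: at most MAX_PENDING requests in flight. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

#define NUM_DRIVES   100
#define MAX_PENDING  32

static sem_t slots;

static void *shutdown_drive(void *arg)
{
    int id = *(int *)arg;
    sleep(1);                          /* simulated shutdown latency */
    printf("drive %d down\n", id);
    sem_post(&slots);                  /* free a slot for the next request */
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_DRIVES];
    int ids[NUM_DRIVES];

    sem_init(&slots, 0, MAX_PENDING);

    for (int i = 0; i < NUM_DRIVES; i++) {
        ids[i] = i;
        sem_wait(&slots);              /* block while MAX_PENDING are in flight */
        pthread_create(&threads[i], NULL, shutdown_drive, &ids[i]);
    }
    for (int i = 0; i < NUM_DRIVES; i++)
        pthread_join(threads[i], NULL);

    sem_destroy(&slots);
    return 0;
}
```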

  12. KoltTv says:

    I find it funny that for the sake of simplicity and out of habit we say "Google did this", but really it was a couple of probably very skilled, paid engineers who happen to work there.

  13. Capitalism Entertainment and Technologies says:

    Linux used to have slower shutdown and reboot times than it needed to; Canonical did a huge push in Upstart to fix that and made it really fast. I was so happy my laptop started shutting down fast. Then came systemd and slowly my shutdown times got longer and longer lol. Can't speak to having 16 NVMe drives lol

  14. Berin Loritsch says:

    In the cloud world, you want to be able to increase capacity dynamically. So the ability to scale on demand depends on how quickly you can bring a new machine online… and shutdown times can affect the overall cost of maintenance.

  15. h7hj59fh3f says:

    It's fascinating how something that's a minor inconvenience for most of us turns out to be a significant problem for Google. They're not measuring the downtime in seconds or minutes; they're measuring it cumulatively in hours and days. Every problem at Google is a problem at scale.

  16. Marc Gràcia says:

    Most hardware is async by nature, so no, you don't need as much infrastructure as for a "database", for example. Those drives for sure will get a command over PCIe and they will interrupt when done.
    So you just need a thread blocked on those interrupts, filling an array of responses, and returning when all results have been received (or timed out).
    All this must already be working more or less like this; one just has to change how the values are returned. Instead of blocking the caller until done, give it a "future" and just update it when done.
    (It's strange how much of the kernel was devoted to "synchronizing" async stuff… and now we are trying to reverse it… sign of the times.)
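
A rough user-space model of the "hand the caller a future" idea from the comment above. The struct future type, the worker thread, and the one-second delay standing in for the device interrupt are all invented for illustration; this is not a kernel API.

```c
/* The async call returns a handle immediately; the caller waits on it later. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

struct future {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    int             done;
    int             result;
};

static void *device_worker(void *arg)
{
    struct future *f = arg;
    sleep(1);                          /* stands in for waiting on the device interrupt */

    pthread_mutex_lock(&f->lock);
    f->result = 0;                     /* e.g. 0 = shutdown completed OK */
    f->done = 1;
    pthread_cond_signal(&f->cond);
    pthread_mutex_unlock(&f->lock);
    return NULL;
}

/* Start the operation and hand back a future instead of blocking the caller. */
static void shutdown_async(struct future *f, pthread_t *t)
{
    pthread_mutex_init(&f->lock, NULL);
    pthread_cond_init(&f->cond, NULL);
    f->done = 0;
    pthread_create(t, NULL, device_worker, f);
}

static int future_wait(struct future *f)
{
    pthread_mutex_lock(&f->lock);
    while (!f->done)
        pthread_cond_wait(&f->cond, &f->lock);
    pthread_mutex_unlock(&f->lock);
    return f->result;
}

int main(void)
{
    enum { NUM_DEVICES = 4 };
    struct future futures[NUM_DEVICES];
    pthread_t workers[NUM_DEVICES];

    for (int i = 0; i < NUM_DEVICES; i++)
        shutdown_async(&futures[i], &workers[i]);

    for (int i = 0; i < NUM_DEVICES; i++)
        printf("device %d result: %d\n", i, future_wait(&futures[i]));

    for (int i = 0; i < NUM_DEVICES; i++)
        pthread_join(workers[i], NULL);
    return 0;
}
```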

  17. Evert Chin says:

    Again, even if you can only send 16 signals max per batch and wait until they all respond OK (or KO) before you send another batch of 16, you are still saving 14-16x the time. But I doubt the signal limit is that low… you can probably do a ton more.

    It also takes a long time to start up if you have a lot of drives… I wonder if Google is going to address that as well. Is there anything in Linux that just boots the critical drives first, gets into the OS as soon as possible, then loads the rest asynchronously instead of trying to load all the drives during startup? But then, it can probably be solved with custom startup scripts…
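
A small sketch of the batching idea in this comment, assuming a hypothetical cap of 16 outstanding shutdown requests: send a batch, wait for every drive in it, then send the next batch, which is still roughly a 16x improvement over strictly sequential shutdown. Drive counts and delays are placeholders.

```c
/* Batched shutdown: up to BATCH_SIZE concurrent requests per round. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define NUM_DRIVES  64
#define BATCH_SIZE  16

static void *shutdown_drive(void *arg)
{
    int id = *(int *)arg;
    sleep(1);                          /* simulated per-drive shutdown time */
    printf("drive %d down\n", id);
    return NULL;
}

int main(void)
{
    int ids[NUM_DRIVES];
    pthread_t batch[BATCH_SIZE];

    for (int start = 0; start < NUM_DRIVES; start += BATCH_SIZE) {
        int count = (NUM_DRIVES - start < BATCH_SIZE) ? NUM_DRIVES - start : BATCH_SIZE;

        /* Send one batch of shutdown requests concurrently... */
        for (int i = 0; i < count; i++) {
            ids[start + i] = start + i;
            pthread_create(&batch[i], NULL, shutdown_drive, &ids[start + i]);
        }
        /* ...and wait for every drive in the batch before starting the next. */
        for (int i = 0; i < count; i++)
            pthread_join(batch[i], NULL);
    }
    return 0;
}
```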
