Monday, July 21, 2025

CST334: Week 4 (Week 28)

This week we learned about paging, swapping, caching, and translation lookaside buffers. Though paging seems slightly more complex than segmentation, it strikes me as the better approach if a system can only use one or the other. Segmentation feels more intuitive since it involves logical units that come up a lot in classes outside of OS and compiler courses: the code segment, stack segment, and heap segment, each tracked by a base and an offset. Paging, by contrast, uses a fixed block (page) size in virtual and physical memory addressing that applies to the entirety of the process's memory: code, stack, and heap alike. Fixed page sizes can lead to some internal fragmentation, but they avoid external fragmentation; by design, pages are sized small enough to keep internal fragmentation low but large enough that even a single page is useful in some scenarios. Other than that, it takes practice to understand virtual-to-physical address translation and to do it by hand, but it comes quickly enough.
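To make the by-hand translation concrete, here is a minimal sketch in Python. The page size and the VPN-to-PFN page table below are made-up example values, not anything from the course materials:

```python
PAGE_SIZE = 4096  # 4 KiB pages, so the offset is the low 12 bits

# Hypothetical page table mapping virtual page numbers to physical frames.
page_table = {0: 3, 1: 7, 2: 0}

def translate(vaddr: int) -> int:
    vpn = vaddr // PAGE_SIZE    # virtual page number (the high bits)
    offset = vaddr % PAGE_SIZE  # the offset is unchanged by translation
    pfn = page_table[vpn]       # look up the physical frame number
    return pfn * PAGE_SIZE + offset

# Example: virtual address 0x1ABC is VPN 1, offset 0xABC.
# VPN 1 maps to PFN 7, so the physical address is 7 * 4096 + 0xABC = 0x7ABC.
print(hex(translate(0x1ABC)))
```

The key observation is that only the page number changes; the offset within the page carries straight through.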

We also covered swapping, which allows pages to be moved between main memory and secondary storage (e.g. disks and flash). Since memory is expensive and limited, since many processes can be resident in memory at once, and since an entire process does not need to be in main memory for the entire time it is running, the goal of swapping is to keep the relevant data and code in main memory. When additional data is needed, it is swapped into main memory. Knowing how much slower even an NVMe SSD is than RAM, when configuring a system we ideally want to pay attention to how much RAM we equip it with.
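A quick back-of-the-envelope calculation shows why swapping to disk hurts. The latency numbers here are rough ballpark assumptions (on the order of 100 ns for DRAM and 100 µs for an NVMe page fetch), just to illustrate the weighted-average effect:

```python
RAM_NS = 100        # assumed ~100 ns DRAM access
NVME_NS = 100_000   # assumed ~100 us NVMe page fetch

def effective_access_ns(fault_rate: float) -> float:
    # Weighted average: most accesses hit RAM; a small fraction
    # must wait for the page to be swapped in from disk first.
    return (1 - fault_rate) * RAM_NS + fault_rate * (NVME_NS + RAM_NS)

print(effective_access_ns(0.001))  # -> 200.0
```

With these numbers, even one fault per thousand accesses doubles the average access time, which is why having enough RAM to avoid swapping matters so much.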

We also learned about caching. Cache is another type of volatile, on-chip memory: faster than RAM, but not in the datapath like actual CPU registers. Depending on the algorithm, data that has been accessed recently or is adjacent to recently accessed memory is kept in cache. Lastly, we learned about translation lookaside buffers. TLBs store recently used virtual-to-physical address mappings; without a TLB hit, the full address translation has to be performed, and that is a slower operation.
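The TLB idea can be sketched as a tiny cache in front of the page table. Everything here is illustrative: the page table contents are made up, and real TLBs use hardware replacement policies rather than the simple FIFO eviction shown:

```python
page_table = {0: 5, 1: 2, 2: 8}  # hypothetical full page table (VPN -> PFN)

class TLB:
    def __init__(self, capacity: int = 2):
        self.capacity = capacity
        self.entries = {}  # cached VPN -> PFN mappings, in insertion order
        self.hits = 0
        self.misses = 0

    def lookup(self, vpn: int) -> int:
        if vpn in self.entries:        # fast path: mapping already cached
            self.hits += 1
            return self.entries[vpn]
        self.misses += 1
        pfn = page_table[vpn]          # slow path: full translation
        if len(self.entries) >= self.capacity:
            # Evict the oldest entry (simple FIFO for illustration).
            self.entries.pop(next(iter(self.entries)))
        self.entries[vpn] = pfn
        return pfn

tlb = TLB()
for vpn in [0, 0, 1, 0, 2, 0]:
    tlb.lookup(vpn)
print(tlb.hits, tlb.misses)  # -> 2 4
```

Accesses to a VPN that is still cached skip the page-table lookup entirely, which is exactly the speedup a TLB provides.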

The last two weeks have been focused on process memory addressing, protection, speed, and sizing concerns. We only scratched the surface of this topic compared to the available implementations and the body of work on the subject but I feel like I now have a much better grasp of how it works.

