Sunday, February 15, 2026

CST370: Week 6 (Week 57)

This week we covered AVL trees, 2-3 trees, heap trees, and hashing. Hashing is a slight departure from the structures we've covered recently since it's superficially represented as something other than a tree; that said, it's still a type of mapping construct. The rotation operations used in the insertion, removal, and rebalancing of these trees were new to me. For AVL trees, the professor provided a YouTube video that was an excellent supplement to our recorded lecture and helped clarify things for me.
https://www.youtube.com/watch?v=msU79oNPRJc
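To make the rotation idea concrete for myself, here is a minimal sketch of a right rotation, the building block AVL trees use to rebalance when a subtree gets too tall. The names (`Node`, `rotateRight`, `update`) are my own for illustration, not from the lecture code:

```cpp
#include <algorithm>
#include <cassert>

// Minimal AVL node; 'height' is cached so balance factors are O(1).
struct Node {
    int key;
    int height = 1;
    Node* left = nullptr;
    Node* right = nullptr;
};

int height(Node* n) { return n ? n->height : 0; }

void update(Node* n) {
    n->height = 1 + std::max(height(n->left), height(n->right));
}

// Right rotation around 'y': lifts y->left up to become the root of
// this subtree. Used when the left side is two levels taller than
// the right (balance factor +2).
Node* rotateRight(Node* y) {
    Node* x = y->left;
    y->left = x->right;   // x's right subtree moves under y
    x->right = y;
    update(y);            // y is now x's child, so fix its height first
    update(x);
    return x;             // new subtree root
}
```

A left rotation is the mirror image, and the double rotations (left-right, right-left) are just two of these applied in sequence.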

As an interesting footnote, I also liked the example approaches to confirming that a number is prime. I don't have a deep knowledge of cryptography and number theory, but I have heard that prime numbers play a big role there.
https://www.geeksforgeeks.org/cpp/c-program-to-check-prime-number/
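The core trick in approaches like the one linked above is that a composite number n must have a factor no larger than √n, so trial division only needs to run up to that point. A minimal sketch of that idea (my own function name, not the article's code):

```cpp
#include <cassert>

// Trial division up to sqrt(n): if n == a * b with a <= b, then
// a * a <= n, so checking i * i <= n is enough. Skipping even
// candidates after 2 roughly halves the work.
bool isPrime(long long n) {
    if (n < 2) return false;
    if (n % 2 == 0) return n == 2;
    for (long long i = 3; i * i <= n; i += 2)
        if (n % i == 0) return false;
    return true;
}
```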

As usual, I was curious about real-world applications of the concepts introduced this week. I found that the Linux Completely Fair Scheduler (CFS) is built around a red-black tree, a balanced BST closely related to the 2-3 family: a standard red-black tree corresponds to a 2-3-4 tree, and the left-leaning variant corresponds to a 2-3 tree. I pointed this out in the class Discord channel. CFS has been part of the Linux kernel since v2.6.
https://developer.ibm.com/tutorials/l-completely-fair-scheduler/

Tuesday, February 10, 2026

CST370: Week 5 (Week 56)

This week we covered topological sorting, Kahn's algorithm, binary tree traversal, quicksort, and transform-and-conquer. We covered a broad range of algorithm concepts, but two things stood out to me. 1) I like that we coded an algorithm comparison as part of our assignment. Metrics and comparisons are a big part of the class, so evaluating runtimes side by side on the same system is very helpful. I plan to try a multithreaded vs. single-threaded comparison when I have time. 2) I enjoyed working on the King's Reach problem in the homework and on the quiz. It's a math-focused puzzle, and I enjoyed the process of breaking down the problem, recalling some of my algebra, and applying it in this context.
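Of the topics this week, Kahn's algorithm is the one that sketches most cleanly in code: repeatedly emit a vertex with in-degree zero and decrement its neighbors. This is my own illustrative version, not the assignment solution:

```cpp
#include <cassert>
#include <queue>
#include <vector>

// Kahn's algorithm: repeatedly remove vertices whose in-degree is 0.
// Returns a topological order, or an empty vector if the graph has a
// cycle (in which case not every vertex can be emitted).
std::vector<int> topoSort(int n,
                          const std::vector<std::pair<int,int>>& edges) {
    std::vector<std::vector<int>> adj(n);
    std::vector<int> indeg(n, 0);
    for (auto [u, v] : edges) {
        adj[u].push_back(v);
        ++indeg[v];
    }
    std::queue<int> q;
    for (int v = 0; v < n; ++v)
        if (indeg[v] == 0) q.push(v);
    std::vector<int> order;
    while (!q.empty()) {
        int u = q.front(); q.pop();
        order.push_back(u);
        for (int v : adj[u])
            if (--indeg[v] == 0) q.push(v);  // u's last incoming edge gone
    }
    if ((int)order.size() != n) order.clear();  // cycle detected
    return order;
}
```

Each edge is touched once, so the whole thing runs in Θ(V + E).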

Sunday, February 1, 2026

CST370: Week 4 (Week 55)

This week we covered merge sort, and though we didn't have a programming assignment, I took some time to explore it. I watched some videos suggested by my classmates and did my own search for real-world applications. I'm a fan of low-level code and embedded systems, so I looked at the Linux kernel. As it turns out, the kernel's list_sort routine, which is used to sort linked lists, is based on merge sort. One difference from textbook implementations is that it is iterative rather than recursive. The iterative approach makes for more predictable memory usage, tightly bounded stack usage, and less function call overhead.

Linux 6.17 list_sort
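The iterative idea can be sketched as a bottom-up merge sort. This is not the kernel's code (list_sort works on intrusive linked lists and manages its runs differently); it's just a minimal array version of the same recursion-free structure, with names of my own:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Bottom-up merge sort: instead of recursing, merge adjacent runs of
// width 1, 2, 4, ... in a loop. Stack usage stays constant no matter
// how large the input is, which is the property list_sort is after.
void mergeSortIterative(std::vector<int>& a) {
    std::vector<int> buf(a.size());
    for (size_t width = 1; width < a.size(); width *= 2) {
        for (size_t lo = 0; lo < a.size(); lo += 2 * width) {
            size_t mid = std::min(lo + width, a.size());
            size_t hi  = std::min(lo + 2 * width, a.size());
            // Merge the sorted runs [lo, mid) and [mid, hi) into buf.
            std::merge(a.begin() + lo, a.begin() + mid,
                       a.begin() + mid, a.begin() + hi,
                       buf.begin() + lo);
        }
        std::swap(a, buf);  // buf holds the fully merged pass
    }
}
```

Same Θ(n log n) work as the recursive version, but the only extra memory is one explicitly allocated buffer.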

We also had our midterm this week. The process of preparing, and in my case probably over-preparing, helped make the concepts we covered more concrete in my mind, specifically preparing my four-page note sheet. With the pace of the classes, I sometimes worry that too much of what is taught winds up in short-term memory and won't serve us when we need it. I think the process of exam prep definitely helps avoid that.

Tuesday, January 27, 2026

CST370: Week 3 (Week 54)

This week we covered breadth-first search, depth-first search, brute-force and exhaustive algorithms, and divide and conquer. I was curious about real-world applications of BFS and DFS and learned that BFS is useful in GPS/mapping software for finding the nearest instance of some destination, for example the nearest gas station or restaurant. Since it walks outward from the start location one ring at a time, the first match it finds is guaranteed to be a nearest one. I also learned that forms of DFS are useful for dependency resolution in large software projects like the Linux kernel. Build systems also have clever ways of parallelizing the work: BitBake, for example, builds in multiple threads while minimizing the time any module has to wait for another module to build, i.e. it uses the build machine's resources efficiently.
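The "nearest destination" property of BFS can be sketched in a few lines. This is a toy version with names of my own (`nearestTarget`), where road intersections are vertices and every road segment counts as distance 1:

```cpp
#include <cassert>
#include <queue>
#include <vector>

// BFS from 'start' on an unweighted graph. Because BFS explores in
// rings of increasing distance, the first target vertex dequeued is
// guaranteed to be a nearest one. Returns its distance, or -1 if no
// target is reachable.
int nearestTarget(const std::vector<std::vector<int>>& adj,
                  int start, const std::vector<bool>& isTarget) {
    std::vector<int> dist(adj.size(), -1);   // -1 marks "unvisited"
    std::queue<int> q;
    dist[start] = 0;
    q.push(start);
    while (!q.empty()) {
        int u = q.front(); q.pop();
        if (isTarget[u]) return dist[u];     // first hit is nearest
        for (int v : adj[u])
            if (dist[v] == -1) {
                dist[v] = dist[u] + 1;
                q.push(v);
            }
    }
    return -1;
}
```

This optimality guarantee only holds on unweighted graphs; once road segments have different lengths, Dijkstra's algorithm takes over.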

I feel like brute force may be the way most of us initially approach a problem since it's so direct. That said, I think divide-and-conquer algorithms are really interesting. Complex problems where divide and conquer applies can really benefit from modern computer systems where multithreading and multiprocessing are commonplace, since independent subproblems can be solved in parallel. I look forward to exploring merge sort next week.

Tuesday, January 20, 2026

CST370: Week 2 (Week 53)

This week we covered Big-O and Big-Θ notation for analyzing and comparing algorithms. We looked at recursive, non-recursive, and brute-force algorithms. It is interesting to compare the time complexity of algorithms like recursive tree traversal against their iterative counterparts. Some time ago I auto-generated a numeric file to use as an experiment; watching the stack grow in the debugger over the course of the traversal/print program's execution was a good way of seeing why recursion is avoided in memory-constrained applications. This section was excellent in that it helps us establish useful metrics for algorithms, especially in cases where the comparison isn't intuitive or obvious.
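The iterative counterpart to recursive traversal that I was experimenting with looks roughly like this: the implicit call stack is replaced by an explicit `std::stack`, so the "stack growth" moves from the debugger's call-stack view into a heap-allocated container. A minimal sketch with my own names:

```cpp
#include <cassert>
#include <stack>
#include <vector>

struct Node {
    int key;
    Node* left = nullptr;
    Node* right = nullptr;
};

// In-order traversal with an explicit stack. Same O(h) auxiliary
// space as the recursive version (h = tree height), but it lives on
// the heap, so depth is bounded by available memory rather than by
// a fixed call-stack size.
std::vector<int> inorderIterative(Node* root) {
    std::vector<int> out;
    std::stack<Node*> st;
    Node* cur = root;
    while (cur || !st.empty()) {
        while (cur) { st.push(cur); cur = cur->left; }  // dive left
        cur = st.top(); st.pop();
        out.push_back(cur->key);   // visit
        cur = cur->right;          // then traverse the right subtree
    }
    return out;
}
```

Both versions are Θ(n) in time; the difference is only where the bookkeeping lives.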

Tuesday, January 13, 2026

CST370: Week 1 (Week 52)

This week we eased into the material by reviewing fundamental data structures (queues, stacks, linked lists). We also reviewed trees, graphs, and weighted/unweighted maps, which I think will be a foundation for what's to come in this class. We had puzzles to solve as exercises, which I feel is a good way of sharpening the skills that help us break down problems algorithmically. HW0 and HW1 were relatively straightforward and remind me of programming problems that I used to use as practice when preparing for interviews. I think that in the future a lot of these types of problems will be solved by AI, but it's incredibly valuable for us, especially as computer scientists, to have a strong foundation in solving them by hand.

Friday, August 15, 2025

CST334: Week 8 (Week 32)

I learned a lot this semester, but I had four major takeaways.

1) I got a better understanding of what's happening between the userspace applications that we usually interact with and the underlying hardware (CPU, memory, storage, etc.). We don't always think about whether we are using a character device, a block device, or a network device. We don't think about whether it's interrupt driven or if it uses polling. We don't always think about how the browser window stays active while it simultaneously streams audio or what happens when we open a file on our local disk. This class gives perspective on the aspects of computers that we don't interact with directly.

2) I saw another aspect of computing where efficient algorithms are the centerpiece. Operating systems, like AI, networking, and various search and pattern-recognition technologies, are centered on well-developed algorithms, especially when it comes to scheduling and caching. I looked up some of the problems people have solved and some that are in active research. There is a lot of interesting work being done, and this class forms a good foundation for at least a fundamental understanding of it.

3) I saw how important it is for multiple things to be happening in a system at once and for specialized components to be used to drive performance. Concurrency and multithreading are critical in modern computing, and this class did a good job covering them. For example, PA5 could lead you to imagine what it would be like if applications blocked while waiting for I/O or network operations to complete. On the hardware side, the performance boost we get from purpose-specific hardware like MMUs and DMA controllers is something else I think about. Multiple tasks, multiple threads, multiple processing components, all working in concert.

4) I see how important it is to recognize and consider trade-offs. For example, is there a sweet spot for the size of each cache layer? Does it depend on the targeted application, say a Xeon processor in a server versus a Core i9 in a laptop? Is additional fast cache worth the cost? Additional cache, cores, memory, etc. come at the price of power consumption, heat, space, and a higher price tag. This class helped expand my understanding of those trade-offs.
