How Operating Systems Work · Part 1

What Your Computer Is Doing Right Now

Your machine is running hundreds of processes right now. Memory is being translated, interrupts are firing, the scheduler is switching contexts faster than you can blink. None of this appeared fully formed.

Right now, while you read this sentence, your computer is doing something it never stops doing.

A process — let's say your browser — just asked the kernel to read a few bytes from a socket. The CPU was executing browser code in user space, running normally, when it hit a single instruction: syscall. At that moment the CPU stopped what it was doing, raised its privilege level to ring 0, and jumped to a kernel function. The kernel read the data, copied it into memory the browser controls, then lowered privilege back to ring 3 and handed execution back to the browser — all before the next line of browser code ran. The browser never touched the network hardware. It asked; the kernel handled it.
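You can make that same crossing yourself. Here is a minimal C sketch for Linux; stdin stands in for the browser's socket, and the raw syscall() form and the usual read() wrapper enter the kernel the same way:

#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void) {
    char buf[64];
    /* syscall() makes the crossing explicit: privilege rises to ring 0,
       the kernel copies data into buf, privilege drops back to ring 3,
       and execution resumes on the next line. read(0, buf, sizeof buf)
       would do exactly the same thing through a libc wrapper. */
    ssize_t n = syscall(SYS_read, STDIN_FILENO, buf, sizeof buf);
    if (n > 0)
        printf("the kernel copied %zd bytes into our buffer\n", n);
    return 0;
}

Run it, type something, press Enter. Everything between the syscall instruction and the return happened in ring 0.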

That crossing happened thousands of times in the last second. On your machine. Right now.


Somewhere else in memory, a completely different process is running. A password manager, maybe, or a background sync daemon. It has its own address space — its own private view of memory that looks to it like it owns the entire machine. If it reads address 0x7fff1a2b3c4d, it gets its own data. If your browser reads that same address, it gets completely different data. Same virtual address. Different physical memory. The CPU's memory management unit is translating every address, on every memory access, for every process, continuously — mapping the illusion of private ownership onto the shared physical reality of your RAM.

Neither process knows the other exists. That isolation is not an accident. The kernel arranged it deliberately, and the hardware enforces it.
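You can watch the illusion directly. This C sketch forks a process; afterward, parent and child hold the exact same virtual address, and a write in the child lands in a different physical page:

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int *value = malloc(sizeof *value);
    *value = 1;

    if (fork() == 0) {
        /* Child: writing through the same pointer triggers copy-on-write,
           so the child gets its own physical page. The parent's data
           is untouched. */
        *value = 2;
        printf("child:  address %p holds %d\n", (void *)value, *value);
        return 0;
    }

    wait(NULL);
    printf("parent: address %p holds %d\n", (void *)value, *value);
    return 0;
}

Both lines print the same address and different values. One virtual address, two physical pages, two private views of memory.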


Your keyboard just registered a keypress.

The keyboard controller sent an electrical signal to the CPU's interrupt pin. The CPU, mid-instruction, stopped — not politely finished what it was doing, but stopped — and jumped to a kernel function registered to handle that interrupt. The handler read a scan code from the keyboard controller, converted it to a key event, queued it for the process that owns the focused window, and returned. The CPU went back to whatever it was doing before the interrupt arrived.

The whole thing took microseconds. The process waiting for the keypress didn't poll in a loop. It slept. The kernel woke it when the event arrived.
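A minimal version of that sleeping process, in C:

#include <stdio.h>
#include <unistd.h>

int main(void) {
    char c;
    printf("pid %d: sleeping until input arrives\n", getpid());
    /* This read() blocks. The kernel marks the process as sleeping and
       the scheduler skips it entirely, so it burns zero CPU. When the
       keyboard interrupt finally produces input (press Enter, since
       terminals buffer by line), the kernel wakes the process and the
       read returns. */
    if (read(STDIN_FILENO, &c, 1) == 1)
        printf("woken up: got '%c'\n", c);
    return 0;
}

While it waits, ps reports its state as S: sleeping.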

This is how hardware talks to software. Not by being asked. By interrupting.


Your machine right now is probably running somewhere between a hundred and five hundred processes. Each CPU core can execute instructions for only one of them at a time. The scheduler makes them all appear simultaneous.

Every few milliseconds, a timer interrupt fires. The kernel's scheduler runs. It looks at every process waiting for CPU time and picks the next one. It saves everything about the process that was just running — its register values, its stack pointer, the address it was about to execute — and loads the same information for the process it's switching to. Then it returns, and the new process continues from exactly where it left off, unaware anything happened.

The gap between when a process pauses and when it resumes is too short to perceive. The illusion of parallelism on a single core is constructed entirely from these switches, happening constantly, invisibly, driven by a hardware timer the process has no access to.
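The kernel keeps count. Here is a C sketch using getrusage(), which works on Linux and most Unixes: spin long enough for the timer interrupt to preempt the process a few times, then ask how often it happened.

#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    /* Busy-loop so the timer interrupt has plenty of chances to
       preempt us. */
    volatile unsigned long spin = 0;
    for (unsigned long i = 0; i < 1000000000UL; i++)
        spin += i;

    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    /* Voluntary: the process blocked and gave up the CPU itself.
       Involuntary: the scheduler took the CPU away mid-run. */
    printf("voluntary context switches:   %ld\n", ru.ru_nvcsw);
    printf("involuntary context switches: %ld\n", ru.ru_nivcsw);
    return 0;
}

Every involuntary switch counted there is one of these invisible pauses.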


Open a terminal and run:

ps aux

Every line is a process the kernel is currently tracking. Each one has a PID — a number the kernel assigned when it was created. Each one has a state. Most of them are sleeping: waiting for something — a keypress, a network packet, a timer — that hasn't arrived yet. A sleeping process uses no CPU. The scheduler simply skips it until whatever it's waiting for happens.

Now run:

cat /proc/$$/status

You're looking at the kernel's live record of your shell process. Its PID. Its parent's PID. Its memory usage. Its current state. The /proc filesystem isn't a real filesystem — there's no disk backing it. It's a kernel interface: every read from a /proc file causes the kernel to generate the data in real time from its internal data structures. You're reading the kernel's working memory, translated into text.
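A quick way to confirm there's no file behind it: read the same /proc file twice and watch the contents change. This C sketch uses /proc/uptime, whose two numbers (seconds since boot, cumulative idle seconds) are regenerated on every read:

#include <stdio.h>
#include <unistd.h>

static void read_uptime(const char *label) {
    FILE *f = fopen("/proc/uptime", "r");
    char line[64];
    if (f != NULL) {
        if (fgets(line, sizeof line, f) != NULL)
            printf("%s %s", label, line);
        fclose(f);
    }
}

int main(void) {
    /* Nothing on disk changes between these two calls. The kernel
       simply generates fresh contents each time the file is read. */
    read_uptime("first read: ");
    sleep(1);
    read_uptime("second read:");
    return 0;
}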


When you shut down the machine, everything described above is dismantled, roughly in the reverse of the order it was built.

At the top of the process tree is PID 1 — on most modern Linux systems, that's systemd, the ancestor of every other process on the machine. PID 1 sends SIGTERM to every process it manages. That signal means: stop what you're doing, clean up, and exit. Most processes comply. PID 1 waits. Any process still running after a timeout gets SIGKILL — a signal the process cannot catch or ignore, handled entirely by the kernel, which removes the process from the scheduler's run queue regardless of what it was doing.
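The difference between the two signals is visible from user space. In this C sketch, SIGTERM can be caught and handled gracefully, while asking the kernel to install a handler for SIGKILL is simply refused:

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t stopping = 0;

static void on_term(int sig) {
    (void)sig;
    stopping = 1;   /* just set a flag: the only safe kind of work here */
}

int main(void) {
    struct sigaction sa = {0};
    sa.sa_handler = on_term;

    sigaction(SIGTERM, &sa, NULL);            /* allowed: we get to clean up */
    if (sigaction(SIGKILL, &sa, NULL) == -1)  /* refused by the kernel, always */
        printf("SIGKILL handler rejected\n");

    printf("pid %d: waiting for SIGTERM (try: kill %d)\n", getpid(), getpid());
    while (!stopping)
        pause();    /* sleep until a signal arrives */

    printf("got SIGTERM, cleaning up and exiting\n");
    return 0;
}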

Once processes are gone, filesystem journals flush pending writes to disk. Filesystems unmount. Device drivers release their hold on hardware. Interrupts are disabled. The scheduler stops. The CPU, already in ring 0, executes a halt instruction, and stops.

The machine that was managing hundreds of processes, translating millions of memory addresses, handling continuous hardware interrupts — goes dark.


None of this appeared fully formed.

Before any of it existed, there was a CPU executing instructions at a hard-coded address in firmware. No memory management. No processes. No files. No scheduler. No concept of ring 0 or ring 3. Just a CPU doing what CPUs do: fetching instructions and executing them, one at a time, starting from a fixed address burned into hardware.

Everything described above — the isolation, the boundary crossings, the interrupt handling, the scheduler's illusion of parallelism — had to be built. From that starting point.

The next article goes back to that moment.