Why The Shutdown?

The shutdown procedure of a computer is an important operation that should never be forgotten. The rule is: “never turn off a computer, always shut it down”.

But… what does that mean?

Whatever operating system you are using (Windows, Linux, OS X, BSD, …), you should never simply turn off your computer when you don’t need to use it anymore. The reason for that lies in the slowness of the disks.

[Image: a hard disk drive. Source: https://commons.wikimedia.org/wiki/File:Hard_disk_drive.JPG]

Whether they are traditional electromechanical HDDs or the newer solid state SSDs, they are all definitely slower than the CPU. Because of that, when you store or update something on the disk, the operating system doesn’t want to wait for the disk to finish the operation, because the CPU would be forced to slow down. To avoid waiting, the OS puts all the information you need to store on the disk into a memory buffer, which lives in RAM. When there is nothing else to do, the OS goes back to that buffer from time to time and actually saves that information to the disk.
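
You can actually watch this buffering from a Linux command line. The file name and mount point below are just hypothetical examples, but the commands themselves are standard: sync simply asks the kernel to flush whatever is still sitting in those RAM buffers out to the disks.

    # copy a large file; the command may return while data is still in the RAM buffers
    cp big_file.iso /mnt/usbdrive/

    # ask the kernel to flush all buffered data to the disks
    # (on Linux, sync returns only once the writes have actually completed)
    sync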

[Image: a RAM memory module (Elixir M2U51264DS8HC3G-5T)]

But this means that what you thought was stored on the disk might still be in RAM, and it can stay there for a long time before the OS decides to move the buffer contents to disk. And even then, the OS might decide it does not have time to store everything, so it writes out just a little piece of the buffer, leaving the rest for later.

So, if you just turn off your computer, you may end up in a situation where some of the buffers used by the OS have been only partially transferred to the disk, or not transferred at all.

In the latter case, you will simply lose whatever information was in that buffer.

But in the former case, when the buffer was only partially transferred to disk, you might end up with a corrupted file or, worse, with a corrupted disk, if you turned off the computer while the OS was in the middle of writing something to it.

In this last situation, when you turn the computer back on, you may even find that you cannot boot it anymore, if you are really unlucky. The end result is that you would have to reinstall the OS entirely and lose all the data that was on the disk, unless you are a computer expert and know how to retrieve the lost information or repair the corruption (and even then, you can never do that entirely).

[Image: a Windows NT boot error]

And that’s where the shutdown procedure comes to the rescue. If, instead of simply turning off the computer, you shut it down, the OS runs a whole procedure that prevents the problems above from happening.

[Image: the Windows 9x shutdown dialog. Source: https://commons.wikimedia.org/wiki/File:Windows9xshutdown.jpg]

The shutdown will first tell all the programs that are still running on the computer to terminate on their own, after saving any data that is still pending. It will also run a procedure to flush to disk all the buffers that have not yet been written. And only after all of that is done will it turn off the computer on its own, without the need for you to push the power button.

Yes, the shutdown procedure takes some extra time to complete. But isn’t that time well worth it? You should never skip it.

Now, the shutdown of a computer can usually be done in two different ways.

Way number 1: go to the menu of the desktop GUI and find the place where the shutdown button is located, then click on it and wait for the computer to execute the procedure and turn itself off once it is completed.

Way number 2: use a command line window and issue the command manually. This is very common for servers that do not use a desktop GUI.

Computers running Windows and OS X always have a GUI, so you don’t need to use the command line to shut them down. Computers running Linux, BSD, or other varieties of operating systems may or may not have a GUI, depending on whether they were configured as workstations or as servers.

For workstations, you can just find the shutdown button in the system menu of the GUI.

For servers that do not use a GUI, you will just issue the command manually, in which case you can pass it several options to perform the shutdown in ways that are not even thinkable from the GUI. But that is a story for another time.
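
Just to give a taste of it, here is roughly what that looks like on a typical Linux server. The exact options can vary a little between systems, so check the shutdown man page on yours; you will normally also need root privileges, for example through sudo.

    # power off immediately
    shutdown -h now

    # reboot immediately
    shutdown -r now

    # power off in 10 minutes, warning all logged-in users
    shutdown -h +10 "Going down for maintenance"

    # changed your mind? cancel a scheduled shutdown
    shutdown -c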

Linux: GUI or CLI?

Many people ask: “What is it best to use in Linux, the graphical interface or the command line?”

And, as always, the answer lies in the eye of the beholder.

It actually depends on the kind of user. Regular users, who use the computer to browse the internet or create the occasional document, find their way easily through the graphical interface, and that is perfectly fine. Graphical User Interfaces, or GUIs, were created just for that: to make the computer easy to use for the regular user.

However, if you are a power user who does system administration, you will stay away from the graphical interface and use the Command Line Interface, or CLI, instead. Why? Because the CLI allows you to do things that the GUI cannot. The point is that the GUI makes it simple to use a subset of the CLI commands, but not all of them are available, and for those that are, it certainly does not expose all the possible options. If it did, the GUI would become much more complicated to use, and it was designed for simplicity, not to complicate things.

And if you are a programmer, you start walking a different kind of fine line. There are programmers who work mostly on user interfaces, and for them it is nice to have a graphical environment to work in, possibly with a nice programming environment designed to create simple to moderately complex programs. But if you are one of those programmers who work on very complex systems, where you have to deal with thousands of files at once, then again you shy away from the graphical interface and move back to the CLI, because that is the tool that lets you do things that are unthinkable in a GUI: for example, finding a few files in a multitude of others when you are not even sure exactly what you are looking for, or making simultaneous changes to a whole set of files rather than editing them one by one.
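
Just to give one concrete taste of that last point, here is the kind of one-liner a CLI user might reach for. The pattern and names are purely hypothetical placeholders; the commands themselves (grep, xargs, sed) are standard tools.

    # list every file under the current directory that mentions "old_hostname"
    grep -rl "old_hostname" .

    # replace it with "new_hostname" in all of those files in one shot
    # (-i edits the files in place; this form works with GNU sed)
    grep -rl "old_hostname" . | xargs sed -i 's/old_hostname/new_hostname/g'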

Needs differ and, to satisfy all of them, Linux provides a number of different approaches so that everyone can find their own way in.

Nowadays, a novice Linux user starts with the GUI, because it is simple to use and provides everything he or she needs. Over time, some of these users will discover the CLI and try it. Some will decide it is not for them; others will like it and stick with it. Still others will learn to use both, depending on their needs of the moment. And all of that is OK. That’s why there are so many ways to do the same thing in Linux: not to confuse people, but to give them the chance to find the way they like best to do their things.

Memory Of The Past

You probably have a desktop or laptop computer with at least 4GB of RAM. Well, to have that amount of memory in the 1950s, you would have needed the space of a warehouse.

In 1963, the Apollo modules for lunar exploration had on-board computers with a total of 4kB of RAM. You read that right, no typo: 4kB! That memory was made out of ferrite core modules. They could not put in more because of the weight!

[Image: the Apollo computer’s Display and Keyboard]

Even in the late 1970s, when I was in college, the mainframe I had access to for learning programming had a central RAM of 2MB, which at the time was considered a huge amount. That memory was housed in the same cabinet as the CPU, of course, and that cabinet was about 5 cubic meters (roughly 180 cubic feet). Those 2MB took up most of the space inside it.

Just to give you an idea of the size of memory back then, here is a picture of a 128-bit module:

The actual size of this module is 50×50 mm, or about 2×2 inches. Eight of these modules gave you 1,024 bits, just one kilobit, so you needed 64 of them to make a single kilobyte. And that is not counting the circuitry to power them up and use them.

It took a long time before the first RAM modules were made of transistors, but once that step was done, things started changing very fast.

[Image: an integrated circuit]

The Commodore VIC-20, which came out in 1980, was equipped with a whopping 5K of RAM, a lot for a computer aimed at the masses. Before that, the Commodore PET was equipped with 4K of RAM, but it was originally designed as an office computer. The famous Apple II, which came out in 1977, also had 4K of RAM.

[Image: a Commodore VIC-20]

But that didn’t last long, and soon the first home computers equipped with 64K started coming out. Remember the Commodore 64? Others, like the tiny ZX81 with its 1K, kept memory to a bare minimum instead.

[Image: the circuit board of a ZX81]

My first IBM-compatible PC, back in 1990, was equipped with two memory modules for a total of 2MB, which I later expanded to 10MB. That was huge at the time.

[Image: an ИСКРА 1030.11 personal computer]

And again, just a few years later, PCs equipped with 64MB and 128MB became the norm. And today, no one would buy a computer with less than 4GB!

Things have changed a lot. First slowly, and then faster and faster. The technology evolved exponentially and was able to give us more power, more speed, and more memory.

[Image: a DDRAM memory module]

Today, having memory in the order of gigabytes is a necessity, no longer a luxury, because today’s programs use a lot of graphics, which requires that much memory.

And it all started with small rings of ferrite, assembled by hand with thin wires threaded through them, so that electric current could magnetize them in one direction or the other to represent those 1s and 0s.

[Image: ILLIAC II modules]