Operating systems are a core topic in computer science, and one of those topics every software developer should understand at a basic level. An operating system is a program that acts as an intermediary between a user of a computer and the computer hardware.
Many experts answer the question "what is an operating system?" with long explanations. I will try to explain it as simply as possible.
Think of the hardware as a car, the operating system as the driver, and yourself as the passenger. You just tell the driver the destination, and the driver knows how to start, when to accelerate, when to change gear, when to brake, and how to reach the destination faster.
This is what actually happens: you instruct the computer to do something through the user interface of an application, and the operating system knows how to start the process, what resources are required, how to use RAM efficiently, how to complete the process faster, and so on.
The main functions of an operating system are:
- managing the resources (memory, registers, I/O devices) efficiently.
- allocating the processor for different processes.
- reading and writing data from memory.
- managing application software and recovering if an error occurs during a process.
So you may have a Lamborghini (high-specification hardware), but without a driver (operating system) it is just a showpiece.
The kernel is the most important part of the software collection called the OS. It is the program that does all the heavy lifting in an operating system. It handles the hardware, timing, peripherals, memory, disks, user access and everything you do on a computer. It decides when a program should run and what permissions it should be given. It makes sure that no program accesses memory that does not belong to it and that no program causes other programs to crash. It divides up processor time among processes, and has almost a hundred more duties.
Then we have something called the shell. A shell is the part of an OS that you, as a user, interact with. It is an interface created so that a user can communicate with the kernel. Both command-line and GUI shells exist. The combination of these and a few other pieces of software is together called the OS.
The kernel is the one program running at all times on the computer (without it, you wouldn't have an OS).
Firmware, operating system, kernel… these are common terms that an embedded systems developer runs into a lot. To explain the difference between them clearly, let's first look at the components of an embedded device, or even a general-purpose computer.
For simplicity, we can imagine a computer as a system consisting of a CPU and a variety of peripheral devices. All the user programs run on the CPU, which is a powerful general-purpose execution engine whose function is entirely defined by the program being executed. Peripheral devices, on the other hand, each have a dedicated function, such as:
- eMMC or similar secondary storage — managing blocks, interpreting eMMC commands sent from the host, controlling read-write operations, etc.
- Touch screen controller — interpreting commands from the host; reading the x and y coordinates where the user has touched; sending an interrupt to the host on identifying a touch event, etc.
- GPUs (available in System-on-Chips — SoCs) — managing the command ring buffer between the host CPU and the GPU; interpreting and executing the commands sent from the host via the ring buffer; handling the mapped buffers; signalling completion of command execution (fences), etc.
- BIOS (legacy) — starting, configuring and initiating execution of the boot loader from the boot device when the computer is started.
To fulfil these functions, the peripheral may have a small microcontroller (in the case of touch-screen panel (TSP) controllers, eMMC controllers, etc.); in the case of a GPU it will be a multi-core execution engine. All of them contain a CPU of varying capability.
As with all CPUs, it requires instructions to execute in order to perform its function. But it doesn't need capabilities like:
- advanced interrupt handling,
- segmentation & paging, etc.
Hence it runs a very minimal software package, just enough to complete its intended functionality. This is called FIRMWARE. Firmware:
- usually resides in ROM (read-only memory) in the case of self-contained devices, and starts executing in place (from the ROM itself, on the microcontroller) when the device is powered on.
- is sometimes loaded into a small RAM, if the device/controller has one.
- is usually frozen during production, but in some cases can be upgraded via special tools (if it is burned into EEPROM or NOR flash).
- in some cases resides on other secondary storage devices (hard disk, SD card, etc.). The host CPU loads the firmware onto the peripheral device via some bus (I2C, SPI, etc.) after power-on and reset of the peripheral devices (this happens during boot-up), after which the firmware starts executing on the peripheral device. This is the case in most modern GPUs embedded in SoCs. Upgrading the firmware is quite straightforward in these kinds of systems.
An operating system, on the other hand:
- provides a kernel to interact with the underlying hardware components, and an associated userspace (programs, libc, libraries, etc.) for the user.
- provides (via the kernel) standard functionality like multi-user support, multiprogramming, security, device drivers, etc., which is needed to exploit the underlying hardware.
Basically, firmware, kernel and operating system are abstractions aimed at different classes of application.
The main function of the "interrupt" is to make multiprogramming possible. The CPU is always much quicker than the input/output devices, so when you start an I/O operation, you don't have to wait for it to finish; you can do other work in the meantime. When the I/O has finished, the device sends an interrupt signal, and the program counter gets the address of an interrupt handler routine. What happens if there is no I/O operation? For this case there is a special device, the "interval timer", which raises an interrupt from time to time. The operating system handles these parallel operations; there are different algorithms for this, and it is a nice field of systems programming. There are also software interrupts (exceptions) raised on critical software events (for example, dividing by zero, or violating memory protection).
An interrupt in an operating system is a kind of event, generated either internally or externally, that triggers a specific sequence of actions. It might be easier to explain on a smaller level:
Hardware interrupts come from outside the operating system. On a computer, one may come from a mouse click, the hard disk, or even some form of failure. These are called asynchronous interrupts because they don't necessarily happen after another operation on the computer has finished and aren't tied to the computer's timing: an asynchronous interrupt can happen at any time. On our small system, let's say you have a microcontroller with one button and one LED attached. You set up an interrupt, triggered when the button is pushed, that will turn the LED on and off.
When the button is pushed, it sets a flag somewhere in the microcontroller that essentially sends a direct message saying "Hey, an event happened!", along with the location of the event (in this case, the input the button is attached to). That location points, via the interrupt vector, to a specific place in program memory where the actions that you want to happen are stored. In this case, the location the interrupt vector points to is a function that toggles the LED on or off.
This jump can happen anywhere in the program, and when it does, the computer writes to memory the exact spot it was at, so it can return to whatever it was doing after the interrupt has been handled.
A synchronous interrupt, or software interrupt (also sometimes called an 'exception'), happens when the processor faces some strange or otherwise unexpected behaviour while executing software. These are considered synchronous because they occur within the timing of the computer and are not dependent on any external source.
Interrupt Handling:
- The operating system preserves the state of the CPU by storing the registers and the program counter.
- Incoming interrupts are disabled (to prevent lost interrupts).
- Determine which device raised the interrupt:
Polling (no identity known)
Vectored (the device sends its identity along)
- Separate segments of code determine what action should be taken for each type of interrupt.
Operating System Structure
An operating system can be structured in many ways, and operating systems can be classified into several categories according to their structure.
Some of the main structures used in operating systems are:
1. Simple Structure
It is the oldest architecture used for developing operating systems, with only one or two levels of code. When DOS was originally written, its developers had no idea how big and important it would eventually become. It was written by a few programmers in a relatively short amount of time, without the benefit of modern software engineering techniques, and then gradually grew over time to exceed its original expectations. It does not break the system into subsystems, and has no distinction between user and kernel modes, allowing all programs direct access to the underlying hardware. (Note that user versus kernel mode was not supported by the 8088 chipset anyway, so that really wasn't an option back then.)
The original UNIX OS used a simple layered approach, but almost all of the OS was in one big layer, not really breaking the OS down into layered subsystems.
2. Layered Architecture of operating system
The layered architecture of operating systems was developed in the 1960s. In this approach, the operating system is divided into a number of layers (levels), each built on top of lower layers. The bottom layer (layer 0) is the hardware; the highest (layer N) is the user interface. With modularity, layers are selected such that each uses the functions (operations) and services of only lower-level layers.
The first layer can be debugged without any concern for the rest of the system, since it uses only the basic hardware to implement its functions. Once the first layer is debugged, its correct functioning can be assumed while the second layer is debugged, and so on. If an error is found during the debugging of a particular layer, the error must be in that layer, because the layers below it have already been debugged. Thus the design of the system is simplified when the operating system is broken up into layers.
- Another approach is to break the OS into a number of smaller layers, each of which rests on the layer below it, and relies solely on the services provided by the next lower layer.
- This approach allows each layer to be developed and debugged independently, with the assumption that all lower layers have already been debugged and are trusted to deliver proper services.
- The problem is deciding what order in which to place the layers, as no layer can call upon the services of any higher layer, and so many chicken-and-egg situations may arise.
- Layered approaches can also be less efficient, as a request for service from a higher layer has to filter through all lower layers before it reaches the HW, possibly with significant processing at each step.
OS/2 is an example of a layered operating system; another example is an earlier version of Windows NT.
The main disadvantage of this architecture is that it requires an appropriate definition of the various layers and careful planning of their proper placement, since lower layers must be independent of upper layers.
3. Microkernel Architecture
The OS is built from many user-level processes.
- The basic idea behind microkernels is to remove all non-essential services from the kernel, and implement them as system applications instead, thereby making the kernel as small and efficient as possible.
- Most microkernels provide basic process and memory management, and message passing between other services, and not much more.
- Security and protection can be enhanced, as most services are performed in user mode, not kernel mode.
- System expansion can also be easier, because it only involves adding more system applications, not rebuilding a new kernel.
- Mach was the first and most widely known microkernel, and now forms a major component of Mac OS X.
- Windows NT was originally microkernel-based, but suffered from performance problems relative to Windows 95. NT 4.0 improved performance by moving more services into the kernel, and XP is back to being more monolithic.
- Another microkernel example is QNX, a real-time OS for embedded systems.
4. Modules
A core kernel with dynamically loadable modules.
- Modern OS development is object-oriented, with a relatively small core kernel and a set of modules which can be linked in dynamically. See for example the Solaris structure, as shown in Figure 2.13 below.
- Modules are similar to layers in that each subsystem has clearly defined tasks and interfaces, but any module is free to contact any other module, eliminating the problems of going through multiple intermediary layers, as well as the chicken-and-egg problems.
- The kernel is relatively small in this architecture, similar to microkernels, but the kernel does not have to implement message passing since modules are free to contact each other directly.
5. Hybrid Systems
Most OSes today do not strictly adhere to one architecture, but are hybrids of several.
- The Mac OS X architecture relies on the Mach microkernel for basic system management services, and the BSD kernel for additional services. Application services and dynamically loadable modules (kernel extensions) provide the rest of the OS functionality.
- The iOS operating system was developed by Apple for iPhones and iPads. It runs with less memory and computing power than Mac OS X, and supports a touchscreen interface and graphics for small screens.
- The Android OS was developed for Android smartphones and tablets by the Open Handset Alliance, primarily Google.
- Android is an open-source OS, as opposed to iOS, which has led to its popularity.
- Android includes versions of Linux and a Java virtual machine both optimized for small platforms.
- Android apps are developed using a special Java-for-Android development environment.
Operating system components
An operating system provides the environment within which programs are executed. To construct such an environment, the system is partitioned into small modules with well-defined interfaces. The design of a new operating system is a major task, and it is very important that the goals of the system be well defined before the design begins. The type of system desired is the foundation for choices between the various algorithms and strategies that will be necessary. A system as large and complex as an operating system can only be created by partitioning it into smaller pieces. Each of these pieces should be a well-defined portion of the system with carefully defined inputs, outputs, and function. Obviously, not all systems have the same structure. However, many modern operating systems share the system components outlined below.
1. Process Management
The CPU executes a large number of programs. While its main concern is the execution of user programs, the CPU is also needed for other system activities. These activities are called processes. A process is a program in execution. Typically, a batch job is a process. A time-shared user program is a process. A system task, such as spooling, is also a process. For now, a process may be considered as a job or a time-shared program, but the concept is actually more general.
In general, a process will need certain resources, such as CPU time, memory, files, I/O devices, etc., to accomplish its task. These resources are given to the process when it is created. In addition to the various physical and logical resources that a process obtains when it is created, some initialization data (input) may be passed along. For example, a process whose function is to display the status of a file, say F1, on the screen of a terminal will get as input the name of the file F1 and execute the appropriate program to obtain the desired information.
We emphasize that a program by itself is not a process; a program is a passive entity, while a process is an active entity. Even when two processes are associated with the same program, they are nevertheless considered two separate execution sequences.
A process is the unit of work in a system. Such a system consists of a collection of processes, some of which are operating system processes, those that execute system code, and the rest being user processes, those that execute user code. All of those processes can potentially execute concurrently.
The operating system is responsible for the following activities in connection with process management:
- The creation and deletion of both user and system processes
- The suspension and resumption of processes
- The provision of mechanisms for process synchronization
- The provision of mechanisms for deadlock handling
2. Memory Management
Memory is central to the operation of a modern computer system. Memory is a large array of words or bytes, each with its own address. Interaction is achieved through a sequence of reads and writes to specific memory addresses: the CPU fetches from and stores into memory.
In order for a program to be executed, it must be mapped to absolute addresses and loaded into memory. As the program executes, it accesses program instructions and data from memory by generating these absolute addresses. Eventually the program terminates, its memory space is declared available, and the next program may be loaded and executed.
In order to improve both the utilization of the CPU and the speed of the computer's response to its users, several processes must be kept in memory. There are many different memory management algorithms, and which works best depends on the particular situation. Selection of a memory management scheme for a specific system depends upon many factors, but especially upon the hardware design of the system. Each algorithm requires its own hardware support.
The operating system is responsible for the following activities in connection with memory management.
- Keep track of which parts of memory are currently being used and by whom.
- Decide which processes are to be loaded into memory when memory space becomes available.
- Allocate and deallocate memory space as needed.
3. Secondary Storage Management
The main purpose of a computer system is to execute programs. These programs, together with the data they access, must be in main memory during execution. Since main memory is too small to permanently accommodate all data and programs, the computer system must provide secondary storage to back up main memory. Most modern computer systems use disks as the primary on-line storage of information, for both programs and data. Most programs, like compilers, assemblers, sort routines, editors, formatters, and so on, are stored on the disk until loaded into memory, and then use the disk as both the source and destination of their processing. Hence the proper management of disk storage is of central importance to a computer system.
There are few alternatives. Magnetic tape systems are generally too slow, and in addition they are limited to sequential access. Thus tapes are more suited to storing infrequently used files, where speed is not a primary concern.
The operating system is responsible for the following activities in connection with disk management:
- Free space management
- Storage allocation
- Disk scheduling.
An Operating System provides services to both the users and to the programs.
- It provides programs an environment to execute.
- It provides users the services to execute the programs in a convenient manner.
Following are a few common services provided by an operating system −
- Program execution
- I/O operations
- File System manipulation
- Error Detection
- Resource Allocation
Operating systems handle many kinds of activities, from user programs to system programs like the printer spooler, name servers, file servers, etc. Each of these activities is encapsulated as a process.
A process includes the complete execution context (code to execute, data to manipulate, registers, OS resources in use). Following are the major activities of an operating system with respect to program management −
- Loads a program into memory.
- Executes the program.
- Handles program’s execution.
- Provides a mechanism for process synchronization.
- Provides a mechanism for process communication.
- Provides a mechanism for deadlock handling.
An I/O subsystem comprises I/O devices and their corresponding driver software. Drivers hide the peculiarities of specific hardware devices from the users.
An Operating System manages the communication between user and device drivers.
- An I/O operation means a read or write operation on any file or any specific I/O device.
- Operating system provides the access to the required I/O device when required.
File System Manipulation
A file represents a collection of related information. Computers can store files on the disk (secondary storage), for long-term storage purpose. Examples of storage media include magnetic tape, magnetic disk and optical disk drives like CD, DVD. Each of these media has its own properties like speed, capacity, data transfer rate and data access methods.
A file system is normally organized into directories for easy navigation and usage. These directories may contain files and other directories. Following are the major activities of an operating system with respect to file management −
- Program needs to read a file or write a file.
- The operating system gives the program permission to operate on the file.
- Permission varies from read-only, read-write, denied and so on.
- Operating System provides an interface to the user to create/delete files.
- Operating System provides an interface to the user to create/delete directories.
- Operating System provides an interface to create the backup of file system.
In the case of distributed systems, which are collections of processors that do not share memory, peripheral devices, or a clock, the operating system manages communication between all the processes. Multiple processes communicate with one another through communication lines in the network.
The OS handles routing and connection strategies, and the problems of contention and security. Following are the major activities of an operating system with respect to communication −
- Two processes often require data to be transferred between them
- Both the processes can be on one computer or on different computers, but are connected through a computer network.
- Communication may be implemented by two methods, either by Shared Memory or by Message Passing.
Errors can occur anytime and anywhere. An error may occur in CPU, in I/O devices or in the memory hardware. Following are the major activities of an operating system with respect to error handling −
- The OS constantly checks for possible errors.
- The OS takes an appropriate action to ensure correct and consistent computing.
In a multi-user or multi-tasking environment, resources such as main memory, CPU cycles and file storage must be allocated to each user or job. Following are the major activities of an operating system with respect to resource management −
- The OS manages all kinds of resources using schedulers.
- CPU scheduling algorithms are used for better utilization of CPU.
Considering a computer system having multiple users and concurrent execution of multiple processes, the various processes must be protected from each other’s activities.
Protection refers to a mechanism or a way to control the access of programs, processes, or users to the resources defined by a computer system. Following are the major activities of an operating system with respect to protection −
- The OS ensures that all access to system resources is controlled.
- The OS ensures that external I/O devices are protected from invalid access attempts.
- The OS provides authentication features for each user by means of passwords.
Introduction to System Calls
To understand system calls, first one needs to understand the difference between kernel mode and user mode of a CPU. Every modern operating system supports these two modes.
Modes supported by the operating system
- When the CPU is in kernel mode, the code being executed can access any memory address and any hardware resource.
- Hence kernel mode is a very privileged and powerful mode.
- If a program crashes in kernel mode, the entire system will be halted.
- When the CPU is in user mode, programs don't have direct access to memory and hardware resources.
- In user mode, if any program crashes, only that particular program is halted.
- That means the system will be in a safe state even if a program in user mode crashes.
- Hence, most programs in an OS run in user mode.
In computing, a system call is the programmatic way in which a computer program requests a service from the kernel of the operating system it is running on. System calls expose the services of the operating system to user programs via an application programming interface (API); they provide the interface between a process and the operating system that lets user-level processes request OS services. System calls are the only entry points into the kernel, and all programs needing resources must use them.
When a program in user mode requires access to RAM or a hardware resource, it must ask the kernel to provide access to that resource. This is done via something called a system call.
When a program makes a system call, the CPU is switched from user mode to kernel mode. This is called a mode switch (sometimes loosely referred to as a context switch).
Then the kernel provides the resource that the program requested. After that, the mode switches back from kernel mode to user mode.
Generally, system calls are made by the user level programs in the following situations:
- Creating, opening, closing and deleting files in the file system.
- Creating and managing new processes.
- Creating a connection in the network, sending and receiving packets.
- Requesting access to a hardware device, like a mouse or a printer.
In a typical UNIX system there are around 300 system calls. Some of the important ones in this context are described below.
The fork() system call is used to create processes. When a process (a program in execution) makes a fork() call, an exact copy of the process is created. Now there are two processes, one being the parent process and the other being the child process.
The process that called fork() is the parent process, and the newly created process is called the child process. The child process will be an exact copy of the parent: the process state of the parent (the address space, variables, open files, etc.) is copied into the child process. This means that the parent and child processes have identical but physically separate address spaces; a change of values in the parent process doesn't affect the child, and vice versa.
Both processes start execution from the next line of code i.e., the line after the fork() call. Let’s look at an example:
pid_t val = fork(); // line A (fork() is declared in <unistd.h>)
printf("%d", val); // line B (printf() is declared in <stdio.h>)
When the above code is executed, a child process is created at line A. Both processes then start execution from line B. To differentiate between the child process and the parent process, we need to look at the value returned by the fork() call.
The difference is that, in the parent process, fork() returns a value which represents the process ID of the child process. But in the child process, fork() returns the value 0.
This means that according to the above program, the output of parent process will be the process ID of the child process and the output of the child process will be 0.
The exec() system call is also used to run programs, but there is one big difference between fork() and exec(). The fork() call creates a new process while preserving the parent process, whereas an exec() call replaces the address space, text segment, data segment, etc. of the current process with those of the new program.
This means that after an exec() call, only the new program exists; the program that made the system call has been replaced and no longer runs.
There are many flavors of exec() in UNIX, one being execl(), which is shown below as an example:
execl("/bin/ls", "ls", (char *)NULL); // line A
printf("This text won't be printed unless an error occurs in execl().");
As shown above, the first parameter to the execl() function is the path of the program to be executed, in this case the path of the ls utility in UNIX. It is followed by the name of the program, ls in this case, and then by any optional arguments. The argument list must be terminated by a NULL pointer, conventionally written as (char *)NULL.
When the above example is executed, at line A the ls program is loaded and run, replacing the current program. Hence the printf() function is never called, since the old program no longer exists. The only exception is that, if the execl() call causes an error, the printf() function is executed.
Services Provided by System Calls :
- Process creation and management
- Main memory management
- File Access, Directory and File system management
- Device handling(I/O)
- Networking, etc.
Types of System Calls : There are 5 different categories of system calls –
- Process control: end, abort, create, terminate, allocate and free memory.
- File management: create, open, close, delete, read file etc.
- Device management
- Information maintenance
- Communication: create and delete communication connections, send and receive messages etc.