Scheduling

In computing, scheduling is the method by which threads, processes, or data flows are given access to system resources (e.g. processor time, communications bandwidth). This is usually done to load-balance and share system resources effectively, or to achieve a target quality of service. The need for a scheduling algorithm arises from the requirement of most modern systems to perform multitasking (executing more than one process at a time) and multiplexing (transmitting multiple data streams simultaneously across a single physical channel).

The scheduler is concerned mainly with:

- Throughput: the total number of processes that complete their execution per time unit.
- Latency, specifically the turnaround time (the total time between submission of a process and its completion) and the response time (the time from submission of a process until the first time it is scheduled).
- Fairness: equal CPU time to each process, or more generally, appropriate CPU time according to each process's priority and workload.
- Waiting time: the time the process remains in the ready queue.

In practice these goals often conflict (e.g. throughput versus latency), so a scheduler implements a suitable compromise, giving preference to one of the concerns above depending on the user's needs and objectives. In real-time environments, such as embedded systems for automatic control in industry (for example, robotics), the scheduler must also ensure that processes can meet deadlines; this is crucial for keeping the system stable. Scheduled tasks can also be distributed to remote devices across a network and managed through an administrative back end.
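To make the metrics above concrete, here is a minimal sketch in Python that simulates a non-preemptive first-come-first-served (FCFS) policy and computes turnaround, response, and waiting time for a small workload. The Process fields, the fcfs_metrics helper, and the example workload are illustrative assumptions, not taken from the text; FCFS is used only because it is the simplest policy against which to read off these definitions.

```python
from dataclasses import dataclass

@dataclass
class Process:
    name: str
    arrival: int  # time the process enters the ready queue
    burst: int    # CPU time the process needs to complete

def fcfs_metrics(processes):
    """Simulate non-preemptive FCFS and report per-process metrics."""
    clock = 0
    results = []
    for p in sorted(processes, key=lambda p: p.arrival):
        start = max(clock, p.arrival)         # CPU may sit idle until arrival
        finish = start + p.burst
        results.append({
            "process": p.name,
            "response": start - p.arrival,    # submission -> first scheduled
            "turnaround": finish - p.arrival, # submission -> completion
            "waiting": start - p.arrival,     # time spent in the ready queue
        })
        clock = finish
    return results

if __name__ == "__main__":
    # Hypothetical workload: three processes arriving one time unit apart.
    workload = [Process("A", 0, 8), Process("B", 1, 4), Process("C", 2, 9)]
    for row in fcfs_metrics(workload):
        print(row)
```

For this workload the last process finishes at time 21, so throughput is 3 processes per 21 time units. Note that under a non-preemptive policy the response and waiting times coincide, because a process runs to completion once dispatched; a preemptive scheduler would trade some throughput for lower response times, which is exactly the kind of compromise described above.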
