The first process started on a Unix system is called init. It typically starts by rummaging through /etc/inittab to see what processes should be started.
This often begins with running a script named something like /etc/init.d/rcS, which controls startup of the whole variety of system services: configuring networking, starting logging, running file servers such as NFS, Coda, and Samba, and launching other daemons such as mail servers, web servers, and database servers.
Once the system services are started, init starts getty processes to allow users to log in. There are two traditional approaches to this:
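A fragment of a typical /etc/inittab illustrates both stages; each line has the form id:runlevels:action:process (entries here are representative, not from any particular system):
#+BEGIN_EXAMPLE
# run the system startup script first
si::sysinit:/etc/init.d/rcS
# then keep a getty running on each console
1:2345:respawn:/sbin/getty 38400 tty1
2:2345:respawn:/sbin/getty 38400 tty2
#+END_EXAMPLE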
BSD Init
Where the startup of services is controlled by putting those services into a shell script that is executed.
System V Init
Where services each have a script, commonly placed in /etc/init.d/ or /etc/rc.d/, which then are linked into a set of directories, one for each run level.
The startup script looks at all the scripts in the runlevel's directory, and invokes them in the order indicated.
The SysV approach is not particularly elegant, but it makes it straightforward to write programs that predictably manipulate the set of startup scripts, which is not practical with the BSD approach. Proponents of both approaches can easily find instances where their favorite is to be preferred.
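To illustrate, a SysV runlevel directory contains symlinks whose names govern the order of invocation: a leading S means start, K means kill, and the two-digit number fixes the sequence (the service names here are hypothetical):
#+BEGIN_EXAMPLE
/etc/rc.d/rc3.d/
  K20nfs     -> ../init.d/nfs
  S10network -> ../init.d/network
  S20syslog  -> ../init.d/syslog
  S99local   -> ../init.d/local
#+END_EXAMPLE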
Here are links to some of the "traditional" init implementations:
People have built alternatives, each trying to deliver one improvement or another. Some are "minimalist" alternatives to init, in one case resorting to assembly language to make the program as tiny as possible. The more interesting alternatives try to improve functionality and/or reliability:
Starting up system services is typically the most time-consuming part of starting up a Unix system. Traditional init starts processes serially, one after another, even though they often do not truly depend on one another.
For instance, ntp depends on having network services running, but then goes off and queries remote servers to get updated time information. It would be quite reasonable to spawn ntpd as soon as the network is up, and let it proceed in the background with those queries.
Similarly, the JBoss Java application server requires that there be some network functionality working, but there is no need for other services that do not use JBoss to wait for it. The SN news server does some reorganization of the news spool at startup; nothing needs to wait for that to complete.
serel tries to maximize the degree to which startup processes may be spawned in parallel.
Richard Gooch has built a dependency-oriented alternative to the traditional service startup approaches, based on a program called need.
Each service that is to be run has its own script in /sbin/init.d, and init runs each one, in arbitrary order. Each service script contains a set of need commands indicating which services it depends on; the dependencies get run first.
A provides command addresses the situation where a program needs a "logical service" that might be satisfied by multiple "actual services." For instance, there are several mail transfer agents, including Sendmail, Postfix, and qmail; a service that "needs mail services" should not explicitly depend on any one of them, but rather on having an MTA service that may be provided by any of these applications.
Unfortunately, it does not appear to address the parallelism matter, where it would be attractive to start up services that are not mutually dependent concurrently.
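A service script under this scheme might look roughly like the following; the exact command syntax of need and provides is an assumption here, so treat this as a sketch:
#+BEGIN_EXAMPLE
#!/bin/sh
# /sbin/init.d/httpd (hypothetical)
need network          # ensure networking has been started first
need syslog           # and logging
provides webserver    # satisfy anything that "needs webserver"
exec /usr/sbin/httpd
#+END_EXAMPLE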
The init in the Mastodon distribution takes a very similar approach.
A minimal init.
twsinit is a version of init that is written mostly in IA-32 assembler, with the result that it is extremely tiny. Does this solve any real problem? Who can tell?
Using DJB's svscan tool from "daemontools"
svscan starts and monitors a set of processes at some directory base. It starts a supervise process for each subdirectory, running the program referenced in the subdirectory. Every five seconds thereafter, svscan checks for both new subdirectories and for subdirectories where the supervisor has "died," and starts up new supervisors.
This would certainly address the "parallelism" side of things, and it also has the ability to (try to) ensure that services are always running. Unfortunately, it does not offer a way to address dependencies of some services on others.
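Concretely, each subdirectory under the scanned base holds an executable named run; a minimal sketch (the ntpd service directory here is a hypothetical example):
#+BEGIN_EXAMPLE
#!/bin/sh
# /service/ntpd/run -- supervise starts this, and restarts it if it dies
exec /usr/sbin/ntpd -n    # -n: stay in the foreground so supervise can watch it
#+END_EXAMPLE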
Aka GNU dmd, aka GNU Daemon Shepherd: a service manager providing a dependency-based daemon management system, implemented in Scheme. It was originally implemented for use with GNU Hurd, but is also usable with Linux.
There be lots of emotional dragons here...
Why Pro SystemD and Anti SystemD people cannot get along
It's actually mostly politics, just not of one of the parties that they recognize
Proponents tend to have an affiliation with the "modern desktop"
Want more interoperability
Sure, somewhat more "Windows like"
Seek reducing interface complexity but not implementation complexity
Computers as appliances rather than tools
Detractors often come from niche distributions
Less interested in desktop advances
Care about malleability more than user friendliness
Computers as tools
The fight is more about desktop vs minimalism
A moderate annoyance: systemd includes *journald*, which does logging
Why not use *syslog*???
Because you cannot use *syslog* to log things until *syslog* is running
And *syslog* is not running until a whole bunch of other services get started, by *systemd*
Ergo, *systemd* needs a logging system that is not syslog
It may be desirable to forward responsibility for logging to *syslog*, once it is running, and *systemd* supports that
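That forwarding is controlled by a one-line setting in journald's configuration:
#+BEGIN_SRC conf
# /etc/systemd/journald.conf
[Journal]
ForwardToSyslog=yes
#+END_SRC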
objects that systemd can manage
standardized representation of system resources
analogous to services or jobs in other init systems
can use units to abstract services, network resources, filesystem mounts, resource pools
Various activation methods
socket based
start unit upon accessing a socket, like xinetd
bus based
start unit when DBus request is submitted
path based
start unit when inotify indicates a path is available
device based
start unit when hardware is available based on udev events
With dependency mapping/ordering
~.service~
indicates how to manage a service/application, including
how to start/stop the service
when service should be automatically started
dependency and ordering information for related software
~.socket~
indicates network/IPC socket or FIFO that is to be used for socket-based activation
points to ~.service~ file
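As a sketch (service name and port are hypothetical), a socket unit for socket-based activation might look like:
#+BEGIN_SRC conf
# echod.socket -- pairs with echod.service
[Socket]
ListenStream=7777

[Install]
WantedBy=sockets.target
#+END_SRC
When the first connection arrives on port 7777, systemd starts echod.service and hands it the socket.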
~.device~
indicates a device needs systemd management
often needed for ordering, mounting, accessing the device
~.mount~
defines a mountpoint to be managed by systemd
named after mount path with slashes transformed to dashes
/etc/fstab entries automatically get such units: at boot, systemd-fstab-generator translates each fstab line into a corresponding ~.mount~ unit
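A minimal sketch, with hypothetical device and mount point; note that the unit file name mirrors the Where= path:
#+BEGIN_SRC conf
# var-data.mount -- manages a mount on /var/data
[Mount]
What=/dev/sdb1
Where=/var/data
Type=ext4
#+END_SRC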
~.automount~
indicates mount points to be automatically mounted
refers to a ~.mount~ file
~.swap~
indicates swap space
~.target~
target unit provides synchronization points for other units when changing state
seems to be a way of indicating dependencies a bit abstractly
~.path~
indicates cases where, when a path reaches desired state, the service will be started
uses inotify to watch for changes
e.g. - do not start up postgresql-database until the filesystem with data has been mounted
~.timer~
defines a timer, managed by systemd, that determines when the matching service unit will be started
can be used to do things like cron jobs
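A sketch of a daily timer (names are hypothetical); by default it activates the ~.service~ unit of the same name:
#+BEGIN_SRC conf
# cleanup.timer -- fires cleanup.service once a day
[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
#+END_SRC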
~.snapshot~
a saved snapshot of the current state of all units, to which the system can later be returned
~.slice~
associates the unit with Linux Control Group nodes, so that resource limits (CPU, memory, IO) apply to all processes in the slice
so, resource management more than security
samples on Debian say NOTHING about this, so perhaps they're no-ops?
~.scope~
groups processes that were created externally to systemd (user login sessions are the usual example) so that systemd can still track them and apply resource management to them
Default unit files in /lib/systemd/system
Customize into /etc/systemd/system
unit section
defines metadata
Description
what is this?
Documentation
reference
Requires
units required before this one; if a required unit fails, this unit fails too
Wants
units wanted (less strict)
BindsTo
like Requires, but additionally this unit terminates when the listed unit terminates
Before
the listed units must not start until after the present one has started; this expresses ordering only, not dependency
After
this unit must not start until the specified units are started
Conflicts
mutual exclusions of units
Condition
things to test prior to starting the unit
failure leads to graceful skipping of the unit
PathExists
Capability
PathIsReadWrite
DirectoryNotEmpty
FileIsExecutable
KernelCommandLine
NeedsUpdate
PathIsDirectory
PathIsMountPoint
PathIsSymbolicLink
Virtualization
Assert
failure of an assertion leads to reporting failure of the unit (unlike Condition, which skips it quietly)
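A unit fragment contrasting the two (the paths are hypothetical):
#+BEGIN_SRC conf
[Unit]
# Condition: if unmet, the unit is skipped without error
ConditionPathExists=/etc/myapp/main.conf
# Assert: if unmet, the unit is reported as failed
AssertPathIsReadWrite=/var/lib/myapp
#+END_SRC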
Various other sections specialized to various sorts of units
Transaction manager
Jobs
Job queueing
Tasks
What is journald?
Configuration that you mess with tends to be in ~/etc/systemd/system~
Finding all services:
#+BEGIN_EXAMPLE
# systemctl list-units
#+END_EXAMPLE
This lists loaded units; it is also what you get if you omit ~list-units~ entirely
~systemctl list-units --all~
list all loaded units, including inactive ones
~systemctl list-unit-files~
list unit files
~systemctl list-dependencies~
list the hierarchy of dependencies
and other things can be listed
The common start/stop/restart operations work
~systemctl start boa~
~systemctl stop boa~
~systemctl reload boa~
~systemctl force-reload boa~
More operations that should be obvious-ish
~systemctl cat boa~
show the content of the unit file(s) for boa
~systemctl disable boa~
disable boa
~systemctl reenable boa~
reenable boa
~systemctl enable boa~
enable boa
~systemctl edit boa~
edit boa units (copying to /etc/systemd/system if needed)
~systemctl show boa~
show environment values and such
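For instance, ~systemctl edit boa~ creates a drop-in directory and opens an override file; the result might look something like this (values hypothetical):
#+BEGIN_SRC conf
# /etc/systemd/system/boa.service.d/override.conf
[Service]
Restart=always
RestartSec=5
#+END_SRC
Only the directives placed here override the packaged unit in /lib/systemd/system; everything else is inherited.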
Set up a unit file
#+BEGIN_SRC conf
[Unit]
Description=Emacs: the extensible, self-documenting text editor

[Service]
Type=forking
ExecStart=/usr/bin/emacs --daemon
ExecStop=/usr/bin/emacsclient --eval "(kill-emacs)"
Environment=SSH_AUTH_SOCK=%t/keyring/ssh
Restart=always

[Install]
WantedBy=default.target
#+END_SRC
Here is the curiosity that apparently doesn't work out...
Some SystemD units are batch jobs; it is common for there to be some examples set up so that SystemD will run daily maintenance tasks.
It occurs to me, "Wouldn't it be neat if I could set up all my batch jobs as SystemD units? "
Unfortunately, SystemD is intended to manage "the system", and, as a result, jobs are intended to be system-wide, not individual things that a user would manage for themselves.
Awkward... And it suggests that there's something off about how SystemD considers batch jobs if we truly need a different mechanism for them.