The short answer is, fork is in Unix because it was easy to fit into the existing system at the time, and because a predecessor system at Berkeley had used the concept of forks. From The Evolution of the Unix Time-sharing System (relevant text has been highlighted):

Process control in its modern form was designed and implemented within a couple of days. It is astonishing how easily it fitted into the existing system; at the same time it is easy to see how some of the slightly unusual features of the design are present precisely because they represented small, easily-coded changes to what existed. A good example is the separation of the fork and exec functions. The most common model for the creation of new processes involves specifying a program for the process to execute; in Unix, a forked process continues to run the same program as its parent until it performs an explicit exec. The separation of the functions is certainly not unique to Unix, and in fact it was present in the Berkeley time-sharing system, which was well-known to Thompson. Still, it seems reasonable to suppose that it exists in Unix mainly because of the ease with which fork could be implemented without changing much else. The system already handled multiple (i.e. two) processes; there was a process table, and the processes were swapped between main memory and the disk. The initial implementation of fork required only

1) expansion of the process table;
2) addition of a fork call that copied the current process to the disk swap area, using the already existing swap IO primitives, and made some adjustments to the process table.

In fact, the PDP-7's fork call required precisely 27 lines of assembly code. Of course, other changes in the operating system and user programs were required, and some of them were rather interesting and unexpected.
But a combined fork-exec would have been considerably more complicated, if only because exec as such did not exist; its function was already performed, using explicit IO, by the shell.

Since that paper, Unix has evolved. fork followed by exec is no longer the only way to run a program.

vfork was created to be a more efficient fork for the case where the new process intends to do an exec right after the fork. After doing a vfork, the parent and child processes share the same data space, and the parent process is suspended until the child process either execs a program or exits.

posix_spawn creates a new process and executes a file in a single system call. It takes a bunch of parameters that let you selectively share the caller's open files and copy its signal disposition and other attributes to the new process.
In Unix whenever we want to create a new process, we fork the current process, creating a new child process which is exactly the same as the parent process; then we do an exec system call to replace all the data from the parent process with that for the new process. Why do we create a copy of the parent process in the first place and not create a new process directly?
Why do we need to fork to create new processes?
There are several different scenarios; I'll describe the most common ones. The successive macroscopic events are:

- Input: the key press event is transmitted from the keyboard hardware to the application.
- Processing: the application decides that because the key A was pressed, it must display the character a.
- Output: the application gives the order to display a on the screen.

GUI applications

The de facto standard graphical user interface of unix systems is the X Window System, often called X11 because it stabilized in the 11th version of its core protocol between applications and the display server. A program called the X server sits between the operating system kernel and the applications; it provides services including displaying windows on the screen and transmitting key presses to the window that has the focus.

Input

    +----------+              +-------------+         +-----+
    | keyboard |------------->| motherboard |-------->| CPU |
    +----------+              +-------------+         +-----+
                 USB, PS/2, …                 PCI, …   key down/up

First, information about the key press and key release is transmitted from the keyboard to the computer and inside the computer. The details depend on the type of hardware. I won't dwell more on this part because the information remains the same throughout this part of the chain: a certain key was pressed or released.

             +--------+        +----------+         +-------------+
    -------->| kernel |------->| X server |-------->| application |
             +--------+        +----------+         +-------------+
    interrupt           scancode           keysym
                        =keycode           +modifiers

When a hardware event happens, the CPU triggers an interrupt, which causes some code in the kernel to execute. This code detects that the hardware event is a key press or key release coming from a keyboard and records the scan code which identifies the key. The X server reads input events through a device file, for example /dev/input/eventNNN on Linux (where NNN is a number). Whenever there is an event, the kernel signals that there is data to read from that device.
The device file transmits key up/down events with a scan code, which may or may not be identical to the value transmitted by the hardware (the kernel may translate the scan code from a keyboard-dependent value to a common value, and Linux doesn't retransmit the scan codes that it doesn't know). X calls the scan code that it reads a keycode. The X server maintains a table that translates keycodes into keysyms (short for “key symbol”). Keycodes are numeric, whereas keysyms are names such as A, aacute, F1, KP_Add, Control_L, … The keysym may differ depending on which modifier keys are pressed (Shift, Ctrl, …).

There are two mechanisms to configure the mapping from keycodes to keysyms:

- xmodmap is the traditional mechanism. It is a simple table mapping keycodes to a list of keysyms (unmodified, shifted, …).
- XKB is a more powerful, but more complex mechanism with better support for more modifiers, in particular for dual-language configuration, among others.

Applications connect to the X server and receive a notification when a key is pressed while a window of that application has the focus. The notification indicates that a certain keysym was pressed or released as well as what modifiers are currently pressed. You can see keysyms by running the program xev from a terminal. What the application does with the information is up to it; some applications have configurable key bindings. In a typical configuration, when you press the key labeled A with no modifiers, this sends the keysym a to the application; if the application is in a mode where you're typing text, this inserts the character a.

Relationship of keyboard layout and xmodmap goes into more detail on keyboard input. How do mouse events work in linux? gives an overview of mouse input at the lower levels.
Output

    +-------------+        +----------+           +-----+         +---------+
    | application |------->| X server |---····--->| GPU |-------->| monitor |
    +-------------+        +----------+           +-----+         +---------+
                   text or            varies              VGA, DVI,
                   image                                  HDMI, …

There are two ways to display a character.

- Server-side rendering: the application tells the X server “draw this string in this font at this position”. The font resides on the X server.
- Client-side rendering: the application builds an image that represents the character in a font that it chooses, then tells the X server to display that image.

See What are the purposes of the different types of XWindows fonts? for a discussion of client-side and server-side text rendering under X11.

What happens between the X server and the Graphics Processing Unit (the processor on the video card) is very hardware-dependent. Simple systems have the X server draw in a memory region called a framebuffer, which the GPU picks up for display. Advanced systems such as found on any 21st century PC or smartphone allow the GPU to perform some operations directly for better performance. Ultimately, the GPU transmits the screen content pixel by pixel every fraction of a second to the monitor.

Text mode application, running in a terminal

If your text editor is a text mode application running in a terminal, then it is the terminal which is the application for the purpose of the section above. In this section, I explain the interface between the text mode application and the terminal. First I describe the case of a terminal emulator running under X11. What is the exact difference between a 'terminal', a 'shell', a 'tty' and a 'console'? may be useful background here. After reading this, you may want to read the far more detailed What are the responsibilities of each Pseudo-Terminal (PTY) component (software, master side, slave side)?
Input

          +-------------------+               +-------------+
    ----->| terminal emulator |-------------->| application |
          +-------------------+               +-------------+
    keysym                      character or
                                escape sequence

The terminal emulator receives events like “Left was pressed while Shift was down”. The interface between the terminal emulator and the text mode application is a pseudo-terminal (pty), a character device which transmits bytes. When the terminal emulator receives a key press event, it transforms this into one or more bytes which the application gets to read from the pty device.

Printable characters outside the ASCII range are transmitted as one or more bytes depending on the character and encoding. For example, in the UTF-8 encoding of the Unicode character set, characters in the ASCII range are encoded as a single byte, while characters outside that range are encoded as multiple bytes. Key presses that correspond to a function key or a printable character with modifiers such as Ctrl or Alt are sent as an escape sequence. Escape sequences typically consist of the character escape (byte value 27 = 0x1B = \033, sometimes represented as ^[ or \e) followed by one or more printable characters.

A few keys or key combinations have a control character corresponding to them in ASCII-based encodings (which is pretty much all of them in use today, including Unicode): Ctrl+letter yields a character value in the range 1–26, Esc is the escape character seen above and is also the same as Ctrl+[, Tab is the same as Ctrl+I, Return is the same as Ctrl+M, etc.

Different terminals send different escape sequences for a given key or key combination. Fortunately, the converse is not true: given a sequence, there is in practice at most one key combination that it encodes. The one exception is the character 127 = 0x7f = \0177, which is often Backspace but sometimes Delete.
In a terminal, if you type Ctrl+V followed by a key combination, this inserts the first byte of the escape sequence from the key combination literally. Since escape sequences normally consist only of printable characters after the first one, this inserts the whole escape sequence literally. See key bindings table? for a discussion of zsh in this context.

The terminal may transmit the same escape sequence for some modifier combinations (e.g. many terminals transmit a space character for both Space and Shift+Space; xterm has a mode to distinguish modifier combinations, but terminals based on the popular vte library don't). A few keys are not transmitted at all, for example modifier keys or keys that trigger a binding of the terminal emulator (e.g. a copy or paste command). It is up to the application to translate escape sequences into symbolic key names if it so desires.

Output

    +-------------+                +-------------------+
    | application |--------------->| terminal emulator |--->
    +-------------+                +-------------------+
                    character or
                    escape sequence

Output is rather simpler than input. If the application outputs a character to the pty device file, the terminal emulator displays it at the current cursor position. (The terminal emulator maintains a cursor position, and scrolls if the cursor would fall under the bottom of the screen.) The application can also output escape sequences (mostly beginning with ^[ or ^]) to tell the terminal to perform actions such as moving the cursor, changing the text attributes (color, bold, …), or erasing part of the screen. Escape sequences supported by the terminal emulator are described in the termcap or terminfo database. Most terminal emulators nowadays are fairly closely aligned with xterm. See Documentation on LESS_TERMCAP_* variables? for a longer discussion of terminal capability information databases, and How to stop cursor from blinking and Can I set my local machine's terminal colors to use those of the machine I ssh into?
for some usage examples.

Application running in a text console

If the application is running directly in a text console, i.e. a terminal provided by the kernel rather than by a terminal emulator application, the same principles apply. The interface between the terminal and the application is still a byte stream which transmits characters, with special keys and commands encoded as escape sequences.

Remote application, accessed over the network

Remote text application

If you run a program on a remote machine, e.g. over SSH, then the network communication protocol relays data at the pty level.

    +-------------+             +------+             +-----+             +----------+
    | application |<----------->| sshd |<----------->| ssh |<----------->| terminal |
    +-------------+             +------+             +-----+             +----------+
                    byte stream          byte stream         byte stream
                    (char/seq)           over TCP/…          (char/seq)

This is mostly transparent, except that sometimes the remote terminal database may not know all the capabilities of the local terminal.

Remote X11 application

The communication protocol between applications and the X server is itself a byte stream that can be sent over a network protocol such as SSH.

    +-------------+              +------+           +-----+              +----------+
    | application |<------------>| sshd |<--------->| ssh |<------------>| X server |
    +-------------+              +------+           +-----+              +----------+
                    X11 protocol          X11 over           X11 protocol
                                          TCP/…

This is mostly transparent, except that some acceleration features such as movie decoding and 3D rendering that require direct communication between the application and the display are not available.
Suppose I press the A key in a text editor and this inserts the character a in the document and displays it on the screen. I know the editor application isn't directly communicating with the hardware (there's a kernel and stuff in between), so what is going on inside my computer?
How do keyboard input and text output work?
On Debian and derivatives, dpkg --print-architecture will output the primary architecture of the machine it’s run on. This will be armhf on a machine running 32-bit ARM Debian or Ubuntu (or a derivative), arm64 on a machine running 64-bit ARM. On RPM-based systems, rpm --eval '%{_arch}' will output the current architecture name (which may be influenced by other parameters, e.g. --target).

Note that the running architecture may be different from the hardware architecture or even the kernel architecture. It’s possible to run i386 Debian on a 64-bit Intel or AMD CPU, and I believe it’s possible to run armhf on a 64-bit ARM CPU. It’s also possible to have mostly i386 binaries (so the primary architecture is i386) on an amd64 kernel, or even binaries from an entirely different architecture if it’s supported by QEMU (a common use for this is debootstrap chroots used for cross-compiling).
I'm trying to write a script which will determine actions based on the architecture of the machine. I already use uname -m to gather the architecture line, however I do not know how many ARM architectures there are, nor do I know whether one is armhf, armel, or arm64. As this is required for this script to determine whether portions of the script can be run or not, I am trying to find a simple way to determine if the architecture is armhf, armel or arm64. Is there any one-liner or simple command that can be used to output either armhf, armel, or arm64? The script is specifically written for Debian and Ubuntu systems, and I am tagging as such with this in mind (it quits automatically if you aren't on one of those distros, but this could be applied in a much wider way as well if the command(s) exist)EDIT: Recently learned that armel is dead, and arm64 software builders (PPA or virtual based) aren't the most stable. So I have a wildcard search finding arm* and assuming armhf, but it's still necessary to figure out a one liner that returns one of the three - whether it's a Ubuntu/Debian command or a kernel call or something.
Easy command line method to determine specific ARM architecture string?
The tradition in unix tools is to display messages only if something goes wrong. I think this is both for design and practical reasons. The design is intended to make it obvious when something goes wrong: you get an error message, and it's not drowned in not-actually-informative messages. The practical reason is that in unix's very early days, there still were teleprinters; that is, the output from programs would be printed on paper, and you don't want to print progress bars. Whatever the reason, the tradition of only displaying useful messages has stuck in the unix world. Modern tools have sometimes introduced progress bars; in rsync's case, the main motivation is that rsync is often performed over the network, and networks are a lot flakier than local disks, so the progress bar is more useful. The same reasoning applies to wget.
Please note that I don't ask how. I already know options like pv and rsync -P. I want to ask why cp doesn't implement a progress bar, at least as a flag?
Why doesn't cp have a progress bar like wget?
It depends. Something compiled for IA-32 (Intel 32-bit) may run on amd64, as Linux on Intel retains backwards compatibility with 32-bit applications (with suitable software installed). Here's your code compiled on a RedHat 7.3 32-bit system (circa 2002, gcc version 2.96) and then the binary copied over to and run on a Centos 7.4 64-bit system (circa 2017):

    -bash-4.2$ file code
    code: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.2.5, not stripped
    -bash-4.2$ ./code
    -bash: ./code: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory
    -bash-4.2$ sudo yum -y install glibc.i686
    ...
    -bash-4.2$ ./code ; echo $?
    99

Ancient RedHat 7.3 to Centos 7.4 (essentially RedHat Enterprise Linux 7.4) is staying in the same "distribution" family, so will likely have better portability than going from some random "Linux from scratch" install from 2002 to some other random Linux distribution in 2018.

Something compiled for amd64 would not run on 32-bit only releases of Linux (old hardware does not know about new hardware). This is also true for new software compiled on modern systems intended to be run on ancient old things, as libraries and even system calls may not be backwards portable, so may require compilation tricks, or obtaining an old compiler and so forth, or possibly instead compiling on the old system. (This is a good reason to keep virtual machines of ancient old things around.)

Architecture does matter; amd64 (or IA-32) is vastly different from ARM or MIPS, so the binary from one of those would not be expected to run on another. At the assembly level, the main section of your code on IA-32 compiles via gcc -S code.c to

    main:
        pushl %ebp
        movl %esp,%ebp
        movl $99,%eax
        popl %ebp
        ret

which an amd64 system can deal with (on a Linux system--OpenBSD by contrast on amd64 does not support 32-bit binaries; backwards compatibility with old archs does give attackers wiggle room, e.g. CVE-2014-8866 and friends).
Meanwhile, on a big-endian MIPS system, main instead compiles to:

    main:
        .frame  $fp,8,$31
        .mask   0x40000000,-4
        .fmask  0x00000000,0
        .set    noreorder
        .set    nomacro
        addiu   $sp,$sp,-8
        sw      $fp,4($sp)
        move    $fp,$sp
        li      $2,99
        move    $sp,$fp
        lw      $fp,4($sp)
        addiu   $sp,$sp,8
        j       $31
        nop

which an Intel processor will have no idea what to do with, and likewise for the Intel assembly on MIPS. You could possibly use QEMU or some other emulator to run foreign code (perhaps very, very slowly).

However! Your code is very simple code, so will have fewer portability issues than anything else; programs typically make use of libraries that have changed over time (glibc, openssl, ...); for those one may also need to install older versions of various libraries (RedHat for example typically puts "compat" somewhere in the package name for such)

    compat-glibc.x86_64 1:2.12-4.el7.centos

or possibly worry about ABI changes (Application Binary Interface) for way old things that use glibc, or more recently changes due to C++11 or other C++ releases. One could also compile static (greatly increasing the binary size on disk) to try to avoid library issues, though whether some old binary did this depends on whether the old Linux distribution was compiling most everything dynamic (RedHat: yes) or not. On the other hand, things like patchelf can rejigger dynamic (ELF, but probably not a.out format) binaries to use other libraries.

However! Being able to run a program is one thing, and actually doing something useful with it another. Old 32-bit Intel binaries may have security issues if they depend on a version of OpenSSL that has some horrible and not-backported security problem in it, or the program may not be able to negotiate at all with modern web servers (as the modern servers reject the old protocols and ciphers of the old program), or SSH protocol version 1 is no longer supported, or ...
Will the executable of a small, extremely simple program, such as the one shown below, that is compiled on one flavor of Linux run on a different flavor? Or would it need to be recompiled? Does machine architecture matter in a case such as this?

    int main()
    {
        return (99);
    }
Will a Linux executable compiled on one "flavor" of Linux run on a different one?
"Everything is a file" is a bit glib. "Everything appears somewhere in the file system" is closer to the mark, and even then, it's more an ideal than a law of system design.

For example, Unix domain sockets are not files, but they do appear in the file system. You can ls -l a domain socket to display its attributes, modify its access control via chmod, and on some Unix type systems (e.g. macOS, but not Linux) you can even cat data to/from one. But, even though regular TCP/IP network sockets are created and manipulated with the same BSD sockets system calls as Unix domain sockets, TCP/IP sockets do not show up in the file system,¹ even though there is no especially good reason that this should be true.

Another example of non-file objects appearing in the file system is Linux's /proc file system. This feature exposes a great amount of detail about the kernel's run-time operation to user space, mostly as virtual plain text files. Many /proc entries are read-only, but a lot of /proc is also writeable, so you can change the way the system runs using any program that can modify a file. Alas, here again we have a nonideality: BSD Unixes run without /proc by default, and the System V Unixes expose a lot less via /proc than Linux does.

    I can't contrast that to MS Windows

First, much of the sentiment you can find online and in books about Unix being all about file I/O and Windows being "broken" in this regard is obsolete. Windows NT fixed a lot of this. Modern versions of Windows have a unified I/O system, just like Unix, so you can read network data from a TCP/IP socket via ReadFile() rather than the Windows Sockets specific API WSARecv(), if you want to. This exactly parallels the Unix Way, where you can read from a network socket with either the generic read(2) Unix system call or the sockets-specific recv(2) call.² Nevertheless, Windows still fails to take this concept to the same level as Unix, even here in 2021.
There are many areas of the Windows architecture that cannot be accessed through the file system, or that can't be viewed as file-like. Some examples:

- Drivers. Windows' driver subsystem is easily as rich and powerful as Unix's, but to write programs to manipulate drivers, you generally have to use the Windows Driver Kit, which means writing C or .NET code. On Unix type OSes, you can do a lot to drivers from the command line. You've almost certainly already done this, if only by redirecting unwanted output to /dev/null.³
- Inter-program communication. Windows programs don't communicate easily with each other as Unix command line programs do, via text streams and pipes. Unix GUIs are often either built on top of command line programs or export a text command interface, so the same simple text-based communication mechanisms work with GUI programs, too.
- The registry. Unix has no direct equivalent of the Windows registry. The same information is scattered through the file system, largely in /etc, /proc and /sys.

If you don't see that drivers, pipes, and Unix's answer to the Windows registry have anything to do with "everything is a file," read on.

    How does the "Everything is a file" philosophy make a difference here?

I will explain that by expanding on my three points above, in detail.

Long answer, part 1: Drives vs Device Files

Let's say your CF card reader appears as E: under Windows and /dev/sdc under Linux. What practical difference does it make? It is not just a minor syntax difference. On Linux, I can say

    dd if=/dev/zero of=/dev/sdc

to overwrite the contents of /dev/sdc with zeroes. Think about what that means for a second. Here I have a normal user space program (dd(1)) that I asked to read data in from a virtual device (/dev/zero) and write what it read out to a real physical device (/dev/sdc) via the unified Unix file system. dd doesn't know it is reading from and writing to special devices.
It will work on regular files just as well, or on a mix of devices and files, as we will see below. There is no easy way to zero the E: drive on Windows because Windows makes a distinction between files and drives, so you cannot use the same commands to manipulate them. The closest you can get is to do a disk format without the Quick Format option, which zeroes most of the drive contents, but then writes a new file system on top of it. What if I don't want a new file system? What if I really do want the disk to be filled with nothing but zeroes?

Let's be generous and accept this requirement to put a fresh new file system on E:. To do that in a program on Windows, I have to call a special formatting API.⁴ On Linux, you don't need to write a program to access the OS's "format disk" functionality: you just run the appropriate user space program for the file system type you want to create, whether that's mkfs.ext4, mkfs.xfs, or what have you. These programs will write a file system onto whatever file or /dev node you pass.

Because mkfs type programs on Unixy systems don't make artificial distinctions between devices and normal files, I can create an ext4 file system inside a normal file on my Linux box:

    $ dd if=/dev/zero of=myfs bs=1k count=1k
    $ mkfs.ext4 -F myfs

That creates a 1 MiB disk image called myfs in the current directory. I can then mount it as if it were any other external file system:

    $ mkdir mountpoint
    $ sudo mount -o loop myfs mountpoint
    $ grep $USER /etc/passwd > mountpoint/my-passwd-entry
    $ sudo umount mountpoint

Now I have an ext4 disk image with a file called my-passwd-entry in it which contains my user's /etc/passwd entry. If I want, I can blast that image onto my CF card:

    $ sudo dd if=myfs of=/dev/sdc1

Or, I can pack that disk image up, mail it to you, and let you write it to a medium of your choosing, such as a USB memory stick:

    $ gzip myfs
    $ echo "Here's the disk image I promised to send you."
      | mutt -a myfs.gz -s "Password file disk image" \
        [emailprotected]

All of this is possible on Linux⁵ because there is no artificial distinction between files, file systems, and devices. Many things on Unix systems either are files, or are accessed through the file system so they look like files, or in some other way look sufficiently file-like that they can be treated as such.

Windows' concept of the file system is a hodgepodge; it makes distinctions between directories, drives, and network resources. There are three different syntaxes, all blended together in Windows: the Unix-like ..\FOO\BAR path system, drive letters like C:, and UNC paths like \\SERVER\PATH\FILE.TXT. This is because it's an accretion of ideas from Unix, CP/M, MS-DOS, and LAN Manager rather than a single coherent design. It is why there are so many illegal characters in Windows file names.

Unix has a unified file system, with everything accessed by a single common scheme. To a program running on a Linux box, there is no functional difference between /etc/passwd, /media/CF_CARD/etc/passwd, and /mnt/server/etc/passwd. Local files, external media, and network shares all get treated the same way.⁶

Windows can achieve similar ends to my disk image example above, but you have to use special programs written by uncommonly talented programmers. This is why there are so many "virtual DVD" type programs on Windows. The lack of a core OS feature has created an artificial market for programs to fill the gap, which means you have a bunch of people competing to create the best virtual DVD type program. We don't need such programs on *ix systems, because we can just mount an ISO disk image using a loop device.

The same goes for other tools like disk wiping programs, which we also don't need on Unix systems. Want your CF card's contents irretrievably scrambled instead of just zeroed?
Okay, use /dev/random as the data source instead of /dev/zero:

    $ sudo dd if=/dev/random of=/dev/sdc

On Linux, we don't keep reinventing such wheels because the core OS features not only work well enough, they work so well they're used pervasively. One of several ways for booting a Linux box involves a virtual disk image created using techniques like I show above.

I feel it's only fair to point out that if Unix had integrated TCP/IP I/O into the file system from the start, we wouldn't have the netcat vs socat vs ncat vs nc mess, the cause of which was the same design weakness that led to the disk imaging and wiping tool proliferation on Windows: lack of an acceptable OS facility.

Long Answer, part 2: Pipes as Virtual Files

Despite its roots in MS-DOS, Windows never has had a rich command line tradition. This is not to say that Windows doesn't have a command line, or that it lacks many command line programs. Windows even has a very powerful command shell these days, appropriately called PowerShell. Yet, there are knock-on effects of this lack of a command-line tradition.

You get tools like DISKPART, which is almost unknown in the Windows world, because most people do disk partitioning and such through the Computer Management MMC snap-in. Then when you do need to script the creation of partitions, you find that DISKPART wasn't really made to be driven by another program. Yes, you can write a series of commands into a script file and run it via DISKPART /S scriptfile, but it's all-or-nothing. What you really want in such a situation is something more like GNU parted, which will accept single commands like parted /dev/sdb mklabel gpt. That allows your script to do error handling on a step-by-step basis.

What does all this have to do with "everything is a file"? Easy: pipes make command line program I/O into "files," of a sort. Pipes are unidirectional streams, not random-access like a regular disk file, but in many cases the difference is of no consequence.
The important thing is that you can attach two independently developed programs and make them communicate via simple text. In that sense, any two programs designed with the Unix Way in mind can communicate.

In those cases where you really do need a file, it is easy to turn program output into a file:

    $ some-program --some --args > myfile
    $ vi myfile

But why write the output to a temporary file when the "everything is a file" philosophy gives you a better way? If all you want to do is read the output of that command into a vi editor buffer, you can do that directly from the vi "normal" mode:

    :r !some-program --some --args

That inserts that program's output into the active editor buffer at the current cursor position. Under the hood, vi is using pipes to connect the output of the program to a bit of code that uses the same OS calls it would use to read from a file instead. I wouldn't be surprised if the two cases of :r — that is, with and without the ! — both used the same generic data reading loop in all common implementations of vi. I can't think of a good reason not to. This isn't a recent feature of vi, either; it goes clear back to the ancient ed(1) text editor.

This powerful idea pops up over and over in Unix. For a second example of this, recall my mutt email command above. The only reason I had to write that as two separate commands is that I wanted the temporary file to be named *.gz so that the email attachment would be correctly named. If I didn't care about the file's name, I could have used process substitution to avoid creating the temporary file:

    $ echo "Here's the disk image I promised to send you." |
      mutt -a <(gzip -c myfs) -s "Password file disk image" \
      [emailprotected]

That turns the output of gzip -c into a FIFO (which is file-like) or a /dev/fd object (which is file-like).⁷

For yet a third way this powerful idea appears in Unix, consider gdb on Linux systems. This is the debugger used for any software written in C and C++.
Programmers coming to Unix from other systems look at gdb and almost invariably gripe, "Yuck, it's so primitive!" Then they go searching for a GUI debugger, find one of several that exist, and happily continue their work…often never realizing that the GUI just runs gdb underneath, providing a pretty shell on top of it. There aren't competing low-level debuggers on most Unix systems because there is no need for programs to compete at that level. All we need is one good low-level tool that we can all base our high-level tools on, if that low-level tool communicates easily via pipes. This means we now have a documented debugger interface which would allow drop-in replacement of gdb. It's unfortunate that the primary competitor to gdb didn't take this low-friction path, but that quibble aside, lldb is just as scriptable as gdb.

To pull the same thing off on a Windows box, the creators of the replaceable tool would have had to define some kind of formal plugin or automation API. That means it doesn't happen except for only the most popular programs, because it's a lot of work to build both a normal command line user interface and a complete programming API.

This magic happens through the grace of pervasive text-based IPC. Although Windows' kernel has Unix-style anonymous pipes, it's rare to see normal user programs use them for IPC outside of a command shell, because Windows lacks this tradition of creating all core services in a command line version first, then building the GUI on top of it separately. This leads to being unable to do some things without the GUI, which is one reason why there are so many remote desktop systems for Windows, as compared to Linux. This is doubtless part of the reason why Linux is the operating system of the cloud, where everything's done by remote management. Command line interfaces are easier to automate than GUIs in large part because "everything is a file."

Consider SSH. You may ask, how does it work?
SSH connects a network socket (which is file-like) to a pseudo tty at /dev/pty* (which is file-like). Now your remote system is connected to your local one through a connection that so seamlessly matches the Unix Way that you can pipe data through the SSH connection, if you need to.

Are you getting an idea of just how powerful this concept is now? A piped text stream is indistinguishable from a file from a program's perspective, except that it's unidirectional. A program reads from a pipe the same way it reads from a file: through a file descriptor. FDs are absolutely core to Unix; the fact that files, pipes, and network sockets all use the same I/O abstraction should tell you something.

The Windows world, lacking this tradition of simple text communications, makes do with heavyweight OOP interfaces via COM or .NET. If you need to automate such a program, you must also write a COM or .NET program. This is a fair bit more difficult than setting up a pipe on a Unix box. Windows programs lacking these complicated programming APIs can only communicate through impoverished interfaces like the clipboard or File/Save followed by File/Open.

Long Answer, part 3: The Registry vs Configuration Files

The practical difference between the Windows registry and the Unix Way of system configuration also illustrates the benefits of the "everything is a file" philosophy. On Unix type systems, I can look at system configuration information from the command line merely by examining files. I can change system behavior by modifying those same files. For the most part, these configuration files are just plain text files, which means I can use any tool on Unix to manipulate them that can work with plain text files.

Scripting the registry is not nearly so easy on Windows. The easiest method is to make your changes through the Registry Editor GUI on one machine, then blindly apply those changes to other machines with regedit via *.reg files.
That isn't really "scripting," since it doesn't let you do anything conditionally: it's all or nothing. If your registry changes need any amount of logic, the next easiest option is to learn PowerShell, which amounts to learning .NET system programming. It would be as if Unix only had Perl, and you had to do all ad hoc system administration through it. Now, I'm a Perl fan, but not everyone is. Unix lets you use any tool you happen to like, as long as it can manipulate plain text files.

Footnotes:

Plan 9 fixed this design misstep, exposing network I/O via the /net virtual file system. Bash has /dev/tcp that allows network I/O via regular file system functions. Since it is a Bash feature, rather than a kernel feature, it isn't visible outside of Bash or on systems that don't use Bash at all. This shows, by counterexample, why it is such a good idea to make all data resources visible through the file system.

By "modern Windows," I mean Windows NT and all of its direct descendants, which includes Windows 2000, all versions of Windows Server, and all desktop-oriented versions of Windows from XP onward. I use the term to exclude the MS-DOS-based versions of Windows, namely Windows 95 and its direct descendants, Windows 98 and Windows ME, plus their 16-bit predecessors. You can see the distinction in the lack of a unified I/O system in those latter OSes. You cannot pass a TCP/IP socket to ReadFile() on Windows 95; you can only pass sockets to the Windows Sockets APIs. See Andrew Schulman's seminal article, Windows 95: What It's Not, for a deeper dive into this topic.

Make no mistake, /dev/null is a real kernel device on Unix type systems, not just a special-cased file name, as is the superficially equivalent NUL in Windows. Although Windows tries to prevent you from creating a NUL file, it is possible to bypass this protection with mere trickery, fooling Windows' file name parsing logic.
If you try to access that file with cmd.exe or Explorer, Windows will refuse to open it, but you can write to it via Cygwin, since it opens files using similar methods to the example program, and you can delete it via similar trickery.

By contrast, Unix will happily let you rm /dev/null, as long as you have write access to /dev, and let you recreate a new file in its place, all without trickery, because that dev node is just another file. While that dev node is missing, the kernel's null device still exists; it's just inaccessible until you recreate the dev node via mknod. You can even create additional null device dev nodes elsewhere: it doesn't matter if you call it /home/grandma/Recycle Bin, as long as it's a dev node for the null device, it will work exactly the same as /dev/null.

There are actually two high-level "format disk" APIs in Windows: SHFormatDrive() and Win32_Volume.Format(). There are two for a very…well…Windows sort of reason. The first one asks Windows Explorer to display its normal "Format Disk" dialog box, which means it works on any modern version of Windows, but only while a user is interactively logged in. The other you can call in the background without user input, but it wasn't added to Windows until Windows Server 2003. That's right, core OS behavior was hidden behind a GUI until 2003, in a world where Unix shipped mkfs from day 1.

The /etc/mkfs in my copy of Unix V5 from 1974 is a 4136-byte statically-linked PDP-11 executable. (Unix didn't get dynamic linkage until the late 1980s, so it's not like there's a big library somewhere else doing all the real work.) Its source code — included in the V5 system image as /usr/source/s2/mkfs.c — is an entirely self-contained 457-line C program. There aren't even any #include statements! This means you can not only examine what mkfs does at a high level, you can experiment with it using the same tool set Unix was created with, just like you're Ken Thompson, four decades ago. Try that with Windows.
The closest you can come today is to download the MS-DOS source code, first released in 2014, which amounts to just a pile of assembly sources. It will only build with obsolete tools you probably won't have on hand, and in the end you get your very own copy of MS-DOS 2.0, an OS far less powerful than 1974's Unix V5, despite being released nearly a decade later. (Why talk about Unix V5? Because it is the earliest complete Unix system still available. Earlier versions are apparently lost to time. There was a project that pieced together a V1/V2 era Unix, but it appears to be missing mkfs, despite the existence of the V1 manual page linked above proving it must have existed somewhere, somewhen. Either those putting this project together couldn't find an extant copy of mkfs to include, or I suck at finding files without find(1), which also doesn't exist in that system. :))

Now, you might be thinking, "Can't I just call format.com? Isn't that the same on Windows as calling mkfs on Unix?" Alas, no, it isn't the same, for a bunch of reasons:

First, format.com wasn't designed to be scripted. It prompts you to "press ENTER when ready", which means you need to send an Enter key to its input, or it'll just hang.

Then, if you want anything more than a success/failure status code, you have to open its standard output for reading, which is far more complicated on Windows than it has to be. (On Unix, everything in that linked article can be accomplished with a simple popen(3) call.)

Having gone through all this complication, the output of format.com is harder for computer programs to parse than the output of mkfs, being intended primarily for human consumption.

If you trace what format.com does, you find that it does a bunch of complicated calls to DeviceIoControl(), ufat.dll, and such. It is not simply opening a device file and writing a new file system onto that device.
This is the sort of design you get from a company that employs 221000 people worldwide and needs to keep employing them. Contrast what happens when your core OS tools are written by volunteers in their spare time: they come up with expedient, minimal solutions to their problems that pay simplicity dividends to the rest of us.

When talking about loop devices, I talk only about Linux rather than Unix in general because loop devices aren't portable between Unix type systems. There are similar mechanisms in macOS, BSD, etc., but the syntax varies somewhat.

Back in the days when disk drives were the size of washing machines and cost more than the department head's luxury car, big computer labs would share a larger proportion of their collective disk space as compared to modern computing environments. The ability to transparently graft a remote disk into the local file system made such distributed systems far easier to use. This is where we get /usr/share, for instance. Contrast Windows, where drive letters offer you few choices for symbolic expression; does P: refer to the "public" space on BigServer or to the "packages" directory on the software mirror server? The UNC alternative requires you to remember which server your remote files are on, which gets difficult in a large organization with hundreds or thousands of file servers.

Windows didn't get symlinks until 2007, when Vista introduced NTFS symbolic links, and they weren't made usable until a decade later. Windows' symbolic links are more powerful than Unix's symbolic links — a feature of Unix since 1977 — in that they can also point to a remote file share, not just to a local path. Unix did that differently, via NFS in 1984, which builds on top of Unix's preexisting mount point feature, which it has had since the beginning. So, depending on how you look at it, Windows trailed Unix by roughly 2, 3, or 4 decades. You may object, "But it has Unix-style symlinks now!"
Yet this misses the point, since it means there is no decades-old tradition of using them on Windows, so people are unaware of them, in a world where Unix systems use them pervasively. It's impossible to use a Unix system for any significant length of time without learning about symlinks. It doesn't help that Windows' MKLINK program is backwards, and you still can't create them from Windows Explorer, whereas the Unix equivalents to Windows Explorer typically do let you create symlinks.

Bash chooses the method based on the system's capabilities, since /dev/fd isn't available everywhere.
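That footnote's mechanics are easy to observe (bash only): process substitution hands the command a pathname that is secretly one end of a pipe. A small sketch:

```shell
# bash only: the <(...) construct expands to a pathname that is really
# a pipe end; on Linux it typically appears under /dev/fd.
echo <(true)                            # prints something like /dev/fd/63
diff <(printf 'a\n') <(printf 'a\n') && echo identical
```

diff never learns that its two "files" were pipes; it just opens the pathnames it was given.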
I know that "Everything is a file" means that even devices have their filename and path in Unix and Unix-like systems, and that this allows common tools to be used on a variety of resources regardless of their nature. But I can't contrast that to Windows, the only other OS I have worked with. I have read some articles about the concept, but I think they are somewhat hard to grasp for non-developers. A layman's explanation is what people need! For example, when I want to copy a file to a CF card that is attached to a card reader, I will use something like

zcat name_of_file > /dev/sdb

In Windows, I think the card reader will appear as a drive, and we will do something similar, I think. So, how does the "Everything is a file" philosophy make a difference here?
A layman's explanation for "Everything is a file" — what differs from Windows?
The reason why this is permitted is related to what removing a file actually does. Conceptually, rm's job is to remove a name entry from a directory. The fact that the file may then become unreachable if that was the file's only name, and that the inode and space occupied by the file can therefore be recovered at that point, is almost incidental. The name of the system call that the rm command invokes, which is unlink, is even suggestive of this fact. And removing a name entry from a directory is fundamentally an operation on that directory, so that directory is the thing that you need to have permission to write.

The following scenario may make it feel more intuitive. Suppose there are directories:

/home/me   # owned and writable only by me
/home/you  # owned and writable only by you

And there is a file which is owned by me and which has two hard links:

/home/me/myfile
/home/you/myfile

Never mind how that hard link /home/you/myfile got there in the first place. Maybe root put it there. The idea of this example is that you should be allowed to remove the hard link /home/you/myfile. After all, it's cluttering up your directory. You should be able to control what does and doesn't exist inside /home/you. And when you do remove /home/you/myfile, notice that you haven't actually deleted the file. You've only removed one link to it.

Note that if the sticky bit is set on the directory containing a file (shows up as t in ls), then you do need to be the owner of the file in order to be allowed to delete it (unless you own the directory). The sticky bit is usually set on /tmp.
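The rule is easy to demonstrate: strip every permission bit from the file itself, and rm still succeeds, because the write happens to the directory. A minimal sketch:

```shell
cd "$(mktemp -d)"    # a scratch directory we own (and can therefore write)
mkdir d
touch d/f
chmod 000 d/f        # no permissions at all on the file itself
rm -f d/f            # succeeds anyway: rm modifies d, not f
ls d                 # d is now empty
```

Without -f, rm would notice the file is write-protected and ask for confirmation first, but that prompt is a courtesy, not a permission check.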
From the post Why can rm remove read-only files? I understand that rm just needs write permission on the directory to remove a file. But I find it hard to digest the behaviour where we can easily delete a file whose owner and group are different. I tried the following:

mtk: my username
abc: a newly created user

$ ls -l file
-rw-rw-r-- 1 mtk mtk 0 Aug 31 15:40 file
$ sudo chown abc file
$ sudo chgrp abc file
$ ls -l file
-rw-rw-r-- 1 abc abc 0 Aug 31 15:40 file
$ rm file
$ ls -l file
<deleted>

I was thinking this shouldn't have been allowed. Shouldn't a user be able to delete only files under his own ownership? Can someone shed light on why this is permitted, and what is the way to avoid this? I can think only of restricting the write permission of the parent directory to disallow surprise deletions of files.
Why is rm allowed to delete a file under ownership of a different user?
Dennis Ritchie mentions in «The Evolution of the Unix Time-sharing System» that open and close along with read, write and creat were present in the system right from the start. I guess a system without open and close wouldn't be inconceivable, however I believe it would complicate the design. You generally want to make multiple read and write calls, not just one, and that was probably especially true on those old computers with very limited RAM that UNIX originated on. Having a handle that maintains your current file position simplifies this. If read or write were to return the handle, they'd have to return a pair -- a handle and their own return status. The handle part of the pair would be useless for all other calls, which would make that arrangement awkward. Leaving the state of the cursor to the kernel allows it to improve efficiency not only by buffering. There's also some cost associated with path lookup -- having a handle allows you to pay it only once. Furthermore, some files in the UNIX worldview don't even have a filesystem path (or didn't -- now they do with things like /proc/self/fd).
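That per-handle cursor is visible even from the shell: two reads through one open file share an offset, so the second read continues where the first stopped. A small sketch:

```shell
# Both `read` calls see the same open file description, so the kernel's
# stored offset advances between them instead of resetting to zero.
printf 'one\ntwo\nthree\n' > nums
{ read first; read second; } < nums
echo "$first $second"    # prints "one two"
```

If each read implicitly re-opened the file, both would get "one"; the open/close pair is what lets the kernel keep that state for you.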
Why do open() and close() exist in the Unix filesystem design? Couldn't the OS just detect the first time read() or write() was called and do whatever open() would normally do?
On Unix systems, why do we have to explicitly `open()` and `close()` files to be able to `read()` or `write()` them?
A hardware interrupt is not really part of CPU multitasking, but may drive it.

Hardware interrupts are issued by hardware devices like disks, network cards, keyboards, clocks, etc. Each device or set of devices will have its own IRQ (Interrupt ReQuest) line. Based on the IRQ, the CPU will dispatch the request to the appropriate hardware driver. (Hardware drivers are usually subroutines within the kernel rather than a separate process.)

The driver which handles the interrupt is run on the CPU. The CPU is interrupted from what it was doing to handle the interrupt, so nothing additional is required to get the CPU's attention. In multiprocessor systems, an interrupt will usually only interrupt one of the CPUs. (As a special case, mainframes have hardware channels which can deal with multiple interrupts without support from the main CPU.)

The hardware interrupt interrupts the CPU directly. This will cause the relevant code in the kernel to be triggered. For interrupts that take some time to process, the interrupt code may allow itself to be interrupted by other hardware interrupts. In the case of the timer interrupt, the kernel scheduler code may suspend the process that was running and allow another process to run. It is the presence of the scheduler code which enables multitasking.

Software interrupts are processed much like hardware interrupts. However, they can only be generated by processes which are currently running.

Typically, software interrupts are requests for I/O (Input or Output). These will call kernel routines which will schedule the I/O to occur. For some devices the I/O will be done immediately, but disk I/O is usually queued and done at a later time. Depending on the I/O being done, the process may be suspended until the I/O completes, causing the kernel scheduler to select another process to run. I/O may occur between processes, and the processing is usually scheduled in the same manner as disk I/O.

The software interrupt only talks to the kernel.
It is the responsibility of the kernel to schedule any other processes which need to run. This could be another process at the end of a pipe. Some kernels permit some parts of a device driver to exist in user space, and the kernel will schedule this process to run when needed.

It is correct that a software interrupt doesn't directly interrupt the CPU. Only code that is currently running can generate a software interrupt. The interrupt is a request for the kernel to do something (usually I/O) for the running process. A special software interrupt is a Yield call, which requests the kernel scheduler to check to see if some other process can run.

Response to comment:

For I/O requests, the kernel delegates the work to the appropriate kernel driver. The routine may queue the I/O for later processing (common for disk I/O), or execute it immediately if possible. The queue is handled by the driver, often when responding to hardware interrupts. When one I/O completes, the next item in the queue is sent to the device.

Yes, software interrupts avoid the hardware signalling step. The process generating the software request must be a currently running process, so they don't interrupt the CPU. However, they do interrupt the flow of the calling code.

If hardware needs to get the CPU to do something, it causes the CPU to interrupt its attention from the code it is running. The CPU will push its current state onto a stack so that it can later return to what it was doing. The interrupt could stop: a running program; the kernel code handling another interrupt; or the idle process.
I am not sure if I understand the concept of hardware and software interrupts. If I understand correctly, the purpose of a hardware interrupt is to get some attention of the CPU, as part of implementing CPU multitasking. Then what issues a hardware interrupt? Is it the hardware driver process? If yes, where is the hardware driver process running? If it is running on the CPU, then it won't have to get the attention of the CPU by hardware interrupt, right? So is it running elsewhere? Does a hardware interrupt interrupt the CPU directly, or does it first contact the kernel process and the kernel process then contacts/interrupts the CPU?

On the other hand, I think the purpose of a software interrupt is for a process currently running on a CPU to request some resources. What are the resources? Are they all in the form of running processes? For example, do the CPU driver process and memory driver process represent CPU and memory resources? Do the driver processes of the I/O devices represent I/O resources? Are other running processes that the process would like to communicate with also resources? If yes, does a software interrupt contact the processes (which represent the resources) indirectly via the kernel process? Is it right that unlike a hardware interrupt, a software interrupt never directly interrupts the CPU, but instead it interrupts/contacts the kernel process?
What are software and hardware interrupts, and how are they processed?
I can think of three desirable features in a shell:

Interactive usability: common commands should be quick to type; completion; ...
Programming: data structures; concurrency (jobs, pipe, ...); ...
System access: working with files, processes, windows, databases, system configuration, ...

Unix shells tend to concentrate on the interactive aspect and subcontract most of the system access and some of the programming to external tools, such as:

bc for simple math
openssl for cryptography
sed, awk and others for text processing
nc for basic TCP/IP networking
ftp for FTP
mail, Mail, mailx, etc. for basic e-mail
cron for scheduled tasks
wmctrl for basic X window manipulation
dcop for KDE ≤3.x libraries
dbus tools (dbus-* or qdbus) for various system information and configuration tasks (including modern desktop environments such as KDE ≥4)

Many, many things can be done by invoking a command with the right arguments or piped input. This is a very powerful approach — better have one tool per task that does it well, than a single program that does everything but badly — but it does have its limitations.

A major limitation of unix shells, and I suspect this is what you're after with your “object-oriented scripting” requirement, is that they are not good at retaining information from one command to the next, or at combining commands in ways fancier than a pipeline. In particular, inter-program communication is text-based, so applications can only be combined if they serialize their data in a compatible way. This is both a blessing and a curse: the everything-is-text approach makes it easy to accomplish simple tasks quickly, but raises the barrier for more complex tasks.

Interactive usability also runs rather against program maintainability. Interactive programs should be short, require little quoting, not bother you with variable declarations or typing, etc.
Maintainable programs should be readable (so not have many abbreviations), should be unambiguous (so you don't have to wonder whether a bare word is a string, a function name, a variable name, etc.), should have consistency checks such as variable declarations and typing, etc. In summary, a shell is a difficult compromise to reach. Ok, this ends the rant section; on to the examples.

The Perl Shell (psh) “combines the interactive nature of a Unix shell with the power of Perl”. Simple commands (even pipelines) can be entered in shell syntax; everything else is Perl. The project hasn't been in development for a long time. It's usable, but hasn't reached the point where I'd consider using it over pure Perl (for scripting) or pure shell (interactively or for scripting).

IPython is an improved interactive Python console, particularly targeted at numerical and parallel computing. This is a relatively young project.

irb (interactive ruby) is the Ruby equivalent of the Python console.

scsh is a scheme implementation (i.e. a decent programming language) with the kind of system bindings traditionally found in unix shells (strings, processes, files). It doesn't aim to be usable as an interactive shell, however.

zsh is an improved interactive shell. Its strong point is interactivity (command line edition, completion, common tasks accomplished with terse but cryptic syntax). Its programming features aren't that great (on par with ksh), but it comes with a number of libraries for terminal control, regexps, networking, etc.

fish is a clean start at a unix-style shell. It doesn't have better programming or system access features. Because it breaks compatibility with sh, it has more room to evolve better features, but that hasn't happened.

Addendum: another part of the unix toolbox is treating many things as files:

Most hardware devices are accessible as files.
Under Linux, /sys provides more hardware and system control.
On many unix variants, process control can be done through the /proc filesystem.
FUSE makes it easy to write new filesystems. There are already existing filesystems for converting file formats on the fly, accessing files over various network protocols, looking inside archives, etc.

Maybe the future of unix shells is not better system access through commands (and better control structures to combine commands) but better system access through filesystems (which combine somewhat differently — I don't think we've worked out what the key idioms (like the shell pipe) are yet).
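That filesystem-as-interface idea is already usable today: on Linux, a process can inspect itself through /proc with nothing but ordinary file tools. A sketch:

```shell
# Process state as plain files (Linux-specific paths):
tr '\0' ' ' < /proc/self/cmdline; echo   # the command line that started us
grep '^State:' /proc/self/status         # scheduling state, readable as text
```

No process-control API is involved; open, read and close on a pathname are the whole interface.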
Preface: I love bash and have no intention of starting any sort of argument or holy-war, and hopefully this is not an extremely naive question. This question is somewhat related to this post on superuser, but I don't think the OP really knew what he was asking for. I use bash on FreeBSD, linux, OS X, and cygwin on Windows. I've also had extensive experience recently with PowerShell on Windows. Is there a shell for *nix, already available or in the works, that is compatible with bash but adds a layer of object-oriented scripting into the mix? The only thing I know of that comes close is the python console, but as far as I can tell it doesn't provide access to the standard shell environment. For example, I can't just cd ~ and ls, then chmod +x file inside the python console. I would have to use python to perform those tasks rather than the standard unix binaries, or call the binaries using python code. Does such a shell exist?
Object-oriented shell for *nix
Originally you had just dumb terminals - at first actually teletypewriters (similar to an electric typewriter, but with a roll of paper) (hence /dev/tty - TeleTYpers), but later screen+keyboard combos - which just sent a key-code to the computer, and the computer sent back a command that wrote the letter on the terminal (i.e. the terminal was without local echo; the computer had to order the terminal to write what the user typed on the terminal) - this is one of the reasons why so many important Unix commands are so short.

Most terminals were connected by serial lines, but (at least) one was directly connected to the computer (often in the same room) - this was the console. Only a select few users were trusted to work on "the console" (this was often the only "terminal" available in single-user mode).

Later there also were some graphical terminals (so-called "xterminals", not to be confused with the xterm program) with screen & graphical screen-card, keyboard, mouse and a simple processor, which could just run an X server. They did not do any computations themselves, so the X clients ran on the computer they were connected to. Some had hard disks, but they could also boot over the network. They were popular in the early 1990s, before PCs became so cheap and powerful.

Later still, there were "smart" or "intelligent" terminals. Smart terminals have the ability to process user input (line-editing at the shell prompt like inserting characters, removing words with Ctrl-W, removing letters with Ctrl-H or Backspace) without help from the computer. The earlier dumb terminals, on the other hand, could not perform such onsite line-editing. On a dumb terminal, when the user presses a key, the terminal sends/delegates the resulting key-code to the computer to handle. After handling it, the computer sends the result back to the dumb terminal to display (e.g.
pressing Ctrl-W would send a key-code to the computer, the computer would interpret that to mean "delete the last word", so the computer would handle that text change, then simply give the dumb terminal the output it should display).

A "terminal emulator" – the "terminal window" you open with programs such as xterm or konsole – tries to mimic the functionality of such smarter terminals. Also, programs such as PuTTY (Windows) emulate these smart terminal emulators. With the PC, where "the console" (keyboard+screen) and "the computer" are more of a single unit, you got "virtual terminals" (on Linux, keys Alt+F1 through Alt+F6) instead, but these too mimic old-style terminals. Of course, with Unix/Linux becoming more of a desktop operating system often used by a single user, you now do most of your work "at the console", where users before used terminals connected by serial lines.

It's of course the shell that starts programs. It uses the fork system call (C language) to make a copy of itself, with the same environment settings; then the exec system call is used to turn this copy into the command you wanted to run. The shell suspends (unless the command is run in the background) until the command completes. As the command inherits the settings for stdin, stdout and stderr from the shell, the command will write to the terminal's screen and receive input from the terminal's keyboard.
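That fork-then-exec sequence can be watched from the shell itself: the child is a copy that inherits the parent's stdin/stdout, then becomes the new program under its own PID. A sketch:

```shell
# The child inherits our stdout (so its echo lands on the same terminal)
# but runs as a separate process with its own PID.
echo "shell PID: $$"
sh -c 'echo "child PID: $$"'   # forked copy, exec'd into a new sh
```

The two PIDs differ, yet nothing had to be wired up for the child's output to appear here: it came along with the inherited file descriptors.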
If you fire up a terminal and call an executable (assuming one that's line-oriented, for simplicity), you get a reply to the command from the executable. How does this get printed to you (the user)? Does the terminal do something like pexpect (polling, waiting for output)? Or what? How does it get notified of output to be printed? And how does a terminal start a program? (Is it something akin to python's os.fork()?) I'm puzzled how a terminal works; I've been playing with some terminal emulators and I still don't get how all this magic works. I'm looking at the source of konsole (kde) and yakuake (possibly uses konsole) and I can't figure out where all that magic happens.
How does a Linux terminal work?
A UNIX system consists of several parts, or layers as I'd like to call them. To start a system, a program called the boot loader lives at the first sector of a hard disk partition. It is started by the system, and in turn it locates the Operating System kernel and loads it.

Layering

The Kernel. This is the central program which is started by the boot loader. It does the basic hardware interaction for the system (disk, memory, video, sound) and offers a virtual environment in which it can start programs. The kernel also ships all drivers which deal with all the little differences between hardware devices. To the outside world (the higher layers), each class of devices appears to behave in exactly the same consistent way - which in turn, the programs can build upon.

Background subsystems. These are just regular programs, which just stay out of your way. They handle things like remote login, provide a central message bus, and perform actions based on hardware/network events. For example, bluetooth discovery, wifi management, etc. Any network services (file server, print server, web server) also live at this level. In UNIX systems, these are all just normal programs.

The command line tools. These are all little programs which can be started to do things like text editing, downloading files, or administrating the system. At this point, a UNIX system is fully usable for system administrators. In Windows, this layer doesn't really exist anymore.

The graphical user interface. These are also just programs; the only difference is that they draw windows on the screen instead of writing text. This makes the system easier to use for regular users.

Any service or event will go from the bottom all the way up to the top.

Libraries - the common platform

Programs do a lot of common things like displaying a window, drawing stuff at the screen or downloading a file. These things are the same for multiple programs, hence that code is put in separate "library" files (.so files - meaning shared object).
The library can be shared across all programs. For every imaginable thing, there is a library. There is one for reading/writing PNG files. There is one for JPEG files, for reading XML, for encryption, for video playback, and so on. On Linux, the common libraries for application developers are Qt and Gtk. These libraries use lower-level libraries internally for their specific needs, while exposing their functionality in a nice, consistent and concise way for application developers to create applications even faster.

Libraries provide the application platform on which programmers can build end user applications for an Operating System. The more high quality libraries a system provides, the less code a programmer has to write to make a beautiful program. Some libraries can be used across different operating systems (for instance, Qt is), some are really specifically tied to one operating system. This will restrict your program to run on that platform only.

Inter process communication

A third cornerstone of an operating system is the way programs can communicate with each other: the Inter Process Communication (IPC) mechanisms. These exist in several flavors, e.g. a piece of shared memory, or a small channel set up between two programs to exchange data. There is also a central message bus on which each program can post a message and receive a response. This is used for global communication, where it's unknown which program can respond.

From libraries to Operating Systems

With libraries, IPC and the kernel in place, programmers can build all kinds of applications for system services, user administration, configuration, administration, office work, entertainment, etc. This forms the complete suite which novice users recognize as the "operating system". In UNIX/Linux systems, all services are just programs. All system admin tools are just programs. They all do their job, and they can be chained together.
I've summarized a lot of major programs at http://codingdomain.com/linux/sysadmin/

Distinguishable parts with Windows

UNIX is mainly a system of programs, files and restricted permissions. A lot of complexities are avoided, making it a powerful system while it looks like it has an easy job doing it. In detail, these are principles which can be found across UNIX/Linux systems:

- There are uniform ways to access information ("Everything is just a file"). You can open a file, network socket, IPC channel, kernel parameters and a block device as a file. Hence the appearance of the virtual filesystems in /dev, /sys and /proc. The only API you ever need is open, read and close.
- The underlying system is transparent. Every program operates under the same rules. Unlike Windows, there is no artificial difference between a "console program", a "GUI program" or a "background service". They are all just programs that happen to do different things. They can also all be observed, analyzed and debugged in the same way.
- Settings are readable, editable, and can be annotated with comments. They typically have an INI-style format, but may use a custom format for the needs of that application. Because they are just files, they can be copied to other systems, archived or backed up with standard tools.
- No large "do it all at once" applications. The mantra is "do one thing, do it well". Command line tools can be chained together and be powerful. Separate services (e.g. SMTP, IMAP and POP, and login) are separate subprograms, avoiding complex intertwined code and security issues. Complex desktop environments delegate hard work to individual programs.
- fork(). New programs are started by an existing program cloning itself. The clone sets up everything (e.g. file handles), and optionally replaces itself with the new program code. This makes it really easy to apply the same security settings and restrictions to new programs, share memory or set up an IPC mechanism.
The cost of starting a process is also very low.

- The file system is one tree, in which other disk partitions and network shares can be mounted. There is, again, a universal way of accessing data. Common system locations (e.g. /usr) can easily be mounted as a network share.
- The system is built for low user privileges. After login, every user (except root) is confined to their own resources, running applications and files only. Network services reduce their privileges as soon as possible. There is a single clear way to get more privileges, or to ask someone to execute a privileged job on your behalf. Every other call is limited by the restrictions and limitations of the program.
- Every program stores settings in a hidden file/folder of the user's home directory. No program ever attempts to write a global settings file.
- A preference for openly described communication mechanisms over secret mechanisms or specific 1-to-1 mechanisms. Other vendors and software developers are encouraged to follow the same specification, so things can easily be connected, swapped out and yet stay loosely coupled.
I would like to know how the OS works in a nutshell:

- The basic components it's built upon
- How those components work together
- What makes unix UNIX
- What makes it so different from other OSs like Windows
How does a unix or linux system work? [closed]
Summary: you're correct that receiving a signal is not transparent, neither in case i (interrupted without having read anything) nor in case ii (interrupted after a partial read). To do otherwise in case i would require making fundamental changes both to the architecture of the operating system and the architecture of applications.

The OS implementation view

Consider what happens if a system call is interrupted by a signal. The signal handler will execute user-mode code. But the syscall handler is kernel code and does not trust any user-mode code. So let's explore the choices for the syscall handler:

- Terminate the system call; report how much was done to the user code. It's up to the application code to restart the system call in some way, if desired. That's how unix works.
- Save the state of the system call, and allow the user code to resume the call. This is problematic for several reasons:
  - While the user code is running, something could happen to invalidate the saved state. For example, if reading from a file, the file might be truncated. So the kernel code would need a lot of logic to handle these cases.
  - The saved state can't be allowed to keep any lock, because there's no guarantee that the user code will ever resume the syscall, and then the lock would be held forever.
  - The kernel must expose new interfaces to resume or cancel ongoing syscalls, in addition to the normal interface to start a syscall. This is a lot of complication for a rare case.
  - The saved state would need to use resources (memory, at least); those resources would need to be allocated and held by the kernel but be counted against the process's allotment. This isn't insurmountable, but it is a complication. Note that the signal handler might make system calls that themselves get interrupted; so you can't just have a static resource allotment that covers all possible syscalls. And what if the resources cannot be allocated? Then the syscall would have to fail anyway.
Which means the application would need to have code to handle this case, so this design would not simplify the application code.

- Remain in progress (but suspended), and create a new thread for the signal handler. This, again, is problematic:
  - Early unix implementations had a single thread per process.
  - The signal handler would risk stepping on the syscall's toes. This is an issue anyway, but in the current unix design, it's contained.
  - Resources would need to be allocated for the new thread; see above.

The main difference with an interrupt is that the interrupt code is trusted, and highly constrained. It's usually not allowed to allocate resources, or run forever, or take locks and not release them, or do any other kind of nasty things; since the interrupt handler is written by the OS implementer himself, he knows that it won't do anything bad. On the other hand, application code can do anything.

The application design view

When an application is interrupted in the middle of a system call, should the syscall continue to completion? Not always. For example, consider a program like a shell that's reading a line from the terminal, and the user presses Ctrl+C, triggering SIGINT. The read must not complete; that's what the signal is all about. Note that this example shows that the read syscall must be interruptible even if no byte has been read yet. So there must be a way for the application to tell the kernel to cancel the system call. Under the unix design, that happens automatically: the signal makes the syscall return. Other designs would require a way for the application to resume or cancel the syscall at its leisure.

The read system call is the way it is because it's the primitive that makes sense, given the general design of the operating system. What it means is, roughly, "read as much as you can, up to a limit (the buffer size), but stop if something else happens".
Actually reading a full buffer involves running read in a loop until as many bytes as possible have been read; this is a higher-level function, fread(3). Unlike read(2), which is a system call, fread is a library function, implemented in user space on top of read. It's suitable for an application that reads from a file or dies trying; it's not suitable for a command line interpreter or for a networked program that must throttle connections cleanly, nor for a networked program that has concurrent connections and doesn't use threads.

The example of read in a loop is provided in Robert Love's Linux System Programming:

    ssize_t ret;
    while (len != 0 && (ret = read (fd, buf, len)) != 0) {
        if (ret == -1) {
            if (errno == EINTR)
                continue;
            perror ("read");
            break;
        }
        len -= ret;
        buf += ret;
    }

It takes care of case i and case ii and a few more.
From reading the man pages on the read() and write() calls, it appears that these calls get interrupted by signals regardless of whether they have to block or not.

In particular, assume:

- a process establishes a handler for some signal.
- a device is opened (say, a terminal) with O_NONBLOCK not set (i.e. operating in blocking mode)
- the process then makes a read() system call to read from the device and, as a result, executes a kernel control path in kernel-space.
- while the process is executing its read() in kernel-space, the signal for which the handler was installed earlier is delivered to that process, and its signal handler is invoked.

Reading the man pages and the appropriate sections in the SUSv3 'System Interfaces volume (XSH)', one finds that:

i. If a read() is interrupted by a signal before it reads any data (i.e. it had to block because no data was available), it returns -1 with errno set to [EINTR].

ii. If a read() is interrupted by a signal after it has successfully read some data (i.e. it was possible to start servicing the request immediately), it returns the number of bytes read.

Question A): Am I correct to assume that in either case (block/no block) the delivery and handling of the signal is not entirely transparent to the read()?

Case i. seems understandable, since the blocking read() would normally place the process in the TASK_INTERRUPTIBLE state, so that when a signal is delivered, the kernel places the process into the TASK_RUNNING state.

However, when the read() doesn't need to block (case ii.) and is processing the request in kernel-space, I would have thought that the arrival of a signal and its handling would be transparent, much like the arrival and proper handling of a HW interrupt would be.
In particular, I would have assumed that upon delivery of the signal, the process would be temporarily placed into user mode to execute its signal handler, from which it would eventually return to finish off processing the interrupted read() (in kernel-space), so that the read() runs its course to completion, after which the process returns to the point just after the call to read() (in user-space), with all of the available bytes read as a result. But ii. seems to imply that the read() is interrupted, since data is available immediately, but it returns only some of the data (instead of all).

This brings me to my second (and final) question:

Question B): If my assumption under A) is correct, why does the read() get interrupted, even though it does not need to block because there is data available to satisfy the request immediately? In other words, why is the read() not resumed after executing the signal handler, eventually resulting in all of the available data (which was available after all) being returned?
Interruption of system calls when a signal is caught
From your list, it looks like you just had the 32-bit packages used for Wine. Wine needs a bunch of 32-bit libraries to run 32-bit Windows applications. You won't be able to remove the i386 architecture unless you uninstall the 32-bit Wine. But there's no point in doing this: there's nothing wrong with having the i386 architecture enabled.
I used this command to add the i386 arch:

    sudo dpkg --add-architecture i386

And then immediately after, without installing any packages, I tried to remove the i386 arch like so:

    sudo dpkg --remove-architecture i386

And I got the error:

    dpkg: error: cannot remove architecture 'i386' currently in use by the database

Solutions I have seen so far involve removing i386 packages. I haven't installed any; the ones that are installed are vital to the functioning of the OS. What do I do?

EDIT, PLEASE READ THE FOLLOWING TO AVOID DESTROYING YOUR OS: Turns out that 64-bit Linux OSes already include the i386 arch, so the command sudo dpkg --add-architecture i386 didn't really do anything.
dpkg: error: cannot remove architecture 'i386' currently in use by the database
First, why there are separate /lib and /lib64:

The Filesystem Hierarchy Standard mentions that separate /lib and /lib64 exist because:

    10.1. There may be one or more variants of the /lib directory on systems which support more than one binary format requiring separate libraries. (...) This is commonly used for 64-bit or 32-bit support on systems which support multiple binary formats, but require libraries of the same name. In this case, /lib32 and /lib64 might be the library directories, and /lib a symlink to one of them.

On my Slackware 14.2, for example, there are /lib and /lib64 directories for 32-bit and 64-bit libraries respectively, even though /lib is not a symlink as the FHS snippet would suggest:

    $ ls -l /lib/libc.so.6
    lrwxrwxrwx 1 root root 12 Aug 11  2016 /lib/libc.so.6 -> libc-2.23.so
    $ ls -l /lib64/libc.so.6
    lrwxrwxrwx 1 root root 12 Aug 11  2016 /lib64/libc.so.6 -> libc-2.23.so

There are two libc.so.6 libraries, one in /lib and one in /lib64. Each dynamically built ELF binary contains a hardcoded path to the interpreter, in this case either /lib/ld-linux.so.2 or /lib64/ld-linux-x86-64.so.2:

    $ file main
    main: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux.so.2, not stripped
    $ readelf -a main | grep 'Requesting program interpreter'
          [Requesting program interpreter: /lib/ld-linux.so.2]

    $ file ./main64
    ./main64: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, not stripped
    $ readelf -a main64 | grep 'Requesting program interpreter'
          [Requesting program interpreter: /lib64/ld-linux-x86-64.so.2]

The job of the interpreter is to load the necessary shared libraries.
You can ask a GNU interpreter what libraries it would load, without even running a binary, using LD_TRACE_LOADED_OBJECTS=1 or the ldd wrapper:

    $ LD_TRACE_LOADED_OBJECTS=1 ./main
        linux-gate.so.1 (0xf77a9000)
        libc.so.6 => /lib/libc.so.6 (0xf760e000)
        /lib/ld-linux.so.2 (0xf77aa000)
    $ LD_TRACE_LOADED_OBJECTS=1 ./main64
        linux-vdso.so.1 (0x00007ffd535b3000)
        libc.so.6 => /lib64/libc.so.6 (0x00007f56830b3000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f568347c000)

As you can see, a given interpreter knows exactly where to look for libraries: the 32-bit version looks in /lib and the 64-bit version looks in /lib64.

The FHS standard says the following about /bin:

    /bin contains commands that may be used by both the system administrator and by users, but which are required when no other filesystems are mounted (e.g. in single user mode). It may also contain commands which are used indirectly by scripts.

IMO the reason why there are no separate /bin and /bin64 is that if we had a file with the same name in both of these directories, we couldn't call one of them indirectly, because we'd have to put /bin or /bin64 first in $PATH.

However, notice that the above is just convention; the Linux kernel does not really care if you have separate /bin and /bin64. If you want them, you can create them and set up your system accordingly.

You also mentioned Android: note that, except for running a modified Linux kernel, it has nothing to do with GNU systems such as Ubuntu. There is no glibc, no bash (by default; you can of course compile and deploy it manually), and the directory structure is also completely different.
On my laptop:

    $ cat /etc/issue
    Ubuntu 18.04 LTS \n \l

There are two different folders for libraries, x86 and x86_64:

    ~$ ls -1 /
    bin
    lib
    lib64
    sbin
    ...

Why does only one directory exist for binaries?

P.S. I'm also interested in Android, but I hope the answer should be the same.
Why there are `/lib` and `/lib64` but only `/bin`?
The internal structure of directories is dependent on the filesystem in use. If you want to know precisely what happens, have a look at filesystem implementations.

Basically, in most filesystems, a directory is an associative array between filenames (keys) and inode numbers (values). Something like this¹:

    1167010 .
    1158721 ..
    1167626 subdir
     132651 barfile
     132650 bazfile

This list is coded in some (more or less) efficient way inside a chain of (usually) 4KB blocks. Notice that the content of regular files is stored similarly. In the case of directories, there is no point in knowing which size is actually used inside these blocks. That's why the sizes of directories reported by du are multiples of 4KB.

Inodes are there to tie blocks together, forming a single entity, namely a 'file' in the general sense. They are identified by a number, which is some kind of address, and each one is usually stored as a single, special block. Management of all this happens in kernel mode. Software just asks for the creation of a directory with a function named int mkdir(const char *pathname, mode_t mode); leading to a system call, and all the rest is performed behind the scenes.

About link structure:

A hard link is not a file, it's just a new directory entry (i.e. a name - inode number association) referring to a preexisting inode entity². This means that the same inode can be accessed from different pathnames. In particular, since metadata (permissions, ownership, timestamps…) is stored within the inode, it is unique and independent of the pathname chosen to access the file.

A symbolic link is a file and it's distinct from its target. This means that it has its own inode. It used to be handled just like a regular file: the target path was stored in a data block. But now, for efficiency reasons in recent ext filesystems, paths shorter than 60 bytes long are stored within the inode itself (using the fields which would normally be used to store the pointers to data blocks).

¹ this was obtained using ls -ai1 testdir.
² whose type must be different from 'directory' nowadays.
My question is how directories are implemented. I can believe in a data structure like a table, an array or similar. Since UNIX is Open Source, I can look in the source at what the program does when it creates a new directory. Can you tell me where to look, or elaborate on the topic?

That a directory "is" a file I could understand, but is a directory really a file? I'm not sure it is true that files are stored "in" files, and yet you could use the word "file" for nearly anything, and I'm not sure what absolutely is not a file, since you could call even a variable a file. For example, a link is certainly not a file, and a link is like a directory; but then doesn't this violate the idea that a directory is a file?
How are directories implemented in Unix filesystems?
About your performance question, pipes are more efficient than files because no disk IO is needed. So cmd1 | cmd2 is more efficient than cmd1 > tmpfile; cmd2 < tmpfile (this might not be true if tmpfile is backed by a RAM disk or other memory device, or is a named pipe; but if it is a named pipe, cmd1 should be run in the background, as its output can block if the pipe becomes full). If you need the result of cmd1 and still need to send its output to cmd2, you should use cmd1 | tee tmpfile | cmd2, which will allow cmd1 and cmd2 to run in parallel, avoiding disk read operations by cmd2.

Named pipes are useful if many processes read/write to the same pipe. They can also be useful when a program is not designed to use stdin/stdout for its IO and needs to use files. I put files in italics because named pipes are not exactly files from a storage point of view, as they reside in memory and have a fixed buffer size, even if they have a filesystem entry (for reference purposes). Other things in UNIX have filesystem entries without being files: just think of /dev/null or other entries in /dev or /proc.

As pipes (named and unnamed) have a fixed buffer size, read/write operations on them can block, causing the reading/writing process to go into the IOWait state. Also, when do you receive an EOF when reading from a memory buffer? The rules on this behavior are well defined and can be found in the man pages. One thing you cannot do with pipes (named and unnamed) is seek back in the data. As they are implemented using a memory buffer, this is understandable.

About "everything in Linux/Unix is a file", I do not agree. Named pipes have filesystem entries, but are not exactly files. Unnamed pipes do not have filesystem entries (except maybe in /proc). However, most IO operations on UNIX are done using read/write functions that need a file descriptor, including for unnamed pipes (and sockets).
I do not think that we can say that "everything in Linux/Unix is a file", but we can surely say that "most IO in Linux/Unix is done using a file descriptor".
When I just used pipes in bash, I didn't think much about this. But when I read some C code examples using the system call pipe() together with fork(), I wondered how to understand pipes, including both anonymous pipes and named pipes.

It is often heard that "everything in Linux/Unix is a file". I wonder if a pipe is actually a file, so that one end it connects writes to the pipe file, and the other end reads from the pipe file? If yes, where is the pipe file for an anonymous pipe created? In /tmp, /dev, or ...?

However, from examples of named pipes, I also learned that using pipes has space and time performance advantages over explicitly using temporary files, probably because no files are involved in the implementation of pipes. Also, pipes seem not to store data as files do. So I doubt a pipe is actually a file.
How to understand pipes
That entirely depends on what services you want to have on your device.

Programs

You can make Linux boot directly into a shell. It isn't very useful in production (who'd want to just have a shell sitting there?), but it's useful as an intervention mechanism when you have an interactive bootloader: pass init=/bin/sh on the kernel command line. All Linux systems (and all unix systems) have a Bourne/POSIX-style shell in /bin/sh.

You'll need a set of shell utilities. BusyBox is a very common choice; it contains a shell and common utilities for file and text manipulation (cp, grep, …), networking setup (ping, ifconfig, …), process manipulation (ps, nice, …), and various other system tools (fdisk, mount, syslogd, …). BusyBox is extremely configurable: you can select which tools you want, and even individual features, at compile time, to get the right size/functionality compromise for your application. Apart from sh, the bare minimum that you can't really do anything without is mount, umount and halt, but it would be atypical not to also have cat, cp, mv, rm, mkdir, rmdir, ps, sync and a few more. BusyBox installs as a single binary called busybox, with a symbolic link for each utility.

The first process on a normal unix system is called init. Its job is to start other services. BusyBox contains an init system. In addition to the init binary (usually located in /sbin), you'll need its configuration file (usually called /etc/inittab; some modern init replacements do away with that file, but you won't find them on a small embedded system) that indicates what services to start and when. For BusyBox, /etc/inittab is optional; if it's missing, you get a root shell on the console, and the script /etc/init.d/rcS (default location) is executed at boot time.

That's all you need, beyond of course the programs that make your device do something useful.
For example, on my home router running an OpenWrt variant, the only programs are BusyBox, nvram (to read and change settings in NVRAM), and networking utilities.

Unless all your executables are statically linked, you will need the dynamic loader (ld.so, which may be called by different names depending on the choice of libc and on the processor architecture) and all the dynamic libraries (/lib/lib*.so, perhaps some of these in /usr/lib) required by these executables.

Directory structure

The Filesystem Hierarchy Standard describes the common directory structure of Linux systems. It is geared towards desktop and server installations: a lot of it can be omitted on an embedded system. Here is a typical minimum:

- /bin: executable programs (some may be in /usr/bin instead).
- /dev: device nodes (see below)
- /etc: configuration files
- /lib: shared libraries, including the dynamic loader (unless all executables are statically linked)
- /proc: mount point for the proc filesystem
- /sbin: executable programs. The distinction from /bin is that /sbin is for programs that are only useful to the system administrator, but this distinction isn't meaningful on embedded devices. You can make /sbin a symbolic link to /bin.
- /mnt: handy to have on read-only root filesystems as a scratch mount point during maintenance
- /sys: mount point for the sysfs filesystem
- /tmp: location for temporary files (often a tmpfs mount)
- /usr: contains subdirectories bin, lib and sbin. /usr exists for extra files that are not on the root filesystem. If you don't have that, you can make /usr a symbolic link to the root directory.

Device files

Here are some typical entries in a minimal /dev:

- console
- full (writing to it always reports "no space left on device")
- log (a socket that programs use to send log entries), if you have a syslogd daemon (such as BusyBox's) reading from it
- null (acts like a file that's always empty)
- ptmx and a pts directory, if you want to use pseudo-terminals (i.e.
any terminal other than the console), e.g. if the device is networked and you want to telnet or ssh in
- random (returns random bytes, risks blocking)
- tty (always designates the program's terminal)
- urandom (returns random bytes, never blocks, but may be non-random on a freshly-booted device)
- zero (contains an infinite sequence of null bytes)

Beyond that, you'll need entries for your hardware (except network interfaces, which don't get entries in /dev): serial ports, storage, etc.

For embedded devices, you would normally create the device entries directly on the root filesystem. High-end systems have a script called MAKEDEV to create /dev entries, but on an embedded system the script is often not bundled into the image. If some hardware can be hotplugged (e.g. if the device has a USB host port), then /dev should be managed by udev (you may still have a minimal set on the root filesystem).

Boot-time actions

Beyond the root filesystem, you need to mount a few more filesystems for normal operation:

- procfs on /proc (pretty much indispensable)
- sysfs on /sys (pretty much indispensable)
- a tmpfs filesystem on /tmp (to allow programs to create temporary files that will be in RAM, rather than on the root filesystem, which may be in flash or read-only)
- tmpfs, devfs or devtmpfs on /dev if dynamic (see udev in "Device files" above)
- devpts on /dev/pts if you want to use pseudo-terminals (see the remark about pts above)

You can make an /etc/fstab file and call mount -a, or run mount manually.

Start a syslog daemon (as well as klogd for kernel logs, if the syslogd program doesn't take care of it), if you have any place to write logs to.

After this, the device is ready to start application-specific services.

How to make a root filesystem

This is a long and diverse story, so all I'll do here is give a few pointers.
The root filesystem may be kept in RAM (loaded from a (usually compressed) image in ROM or flash), or on a disk-based filesystem (stored in ROM or flash), or loaded from the network (often over TFTP) if applicable. If the root filesystem is in RAM, make it the initramfs, a RAM filesystem whose content is created at boot time.

Many frameworks exist for assembling root images for embedded systems. There are a few pointers in the BusyBox FAQ. Buildroot is a popular one, allowing you to build a whole root image with a setup similar to the Linux kernel and BusyBox. OpenEmbedded is another such framework. Wikipedia has an (incomplete) list of popular embedded Linux distributions. An example of embedded Linux you may have near you is the OpenWrt family of operating systems for network appliances (popular on tinkerers' home routers). If you want to learn by experience, you can try Linux From Scratch, but it's geared towards desktop systems for hobbyists rather than towards embedded devices.

A note on Linux vs. the Linux kernel

The only behavior that's baked into the Linux kernel is the first program that's launched at boot time. (I won't get into initrd and initramfs subtleties here.) This program, traditionally called init, has process ID 1 and has certain privileges (immunity to KILL signals) and responsibilities (reaping orphans). You can run a system with a Linux kernel and start whatever you want as the first process, but then what you have is an operating system based on the Linux kernel, and not what is normally called "Linux": Linux, in the common sense of the term, is a Unix-like operating system whose kernel is the Linux kernel. For example, Android is an operating system which is not Unix-like but is based on the Linux kernel.
It's a question about user space applications, but hear me out! Three "applications", so to speak, are required to boot a functional distribution of Linux:

1. Bootloader - For embedded, typically that's U-Boot, although it's not a hard requirement.
2. Kernel - That's pretty straightforward.
3. Root Filesystem - Can't boot to a shell without it. Contains the filesystem the kernel boots to, and where init is called from.

My question is in regard to #3. If someone wanted to build an extremely minimal rootfs (for this question let's say no GUI, shell only), what files/programs are required to boot to a shell?
What are the minimum root filesystem applications that are required to fully boot linux?
You can determine the nature of an executable in Unix using the file command and the type command.

type

You use type to determine an executable's location on disk like so:

    $ type -a ls
    ls is /usr/bin/ls
    ls is /bin/ls

So I now know that ls is located in 2 locations on my system: /usr/bin/ls and /bin/ls. Looking at those executables, I can see they're identical:

    $ ls -l /usr/bin/ls /bin/ls
    -rwxr-xr-x. 1 root root 120232 Jan 20 05:11 /bin/ls
    -rwxr-xr-x. 1 root root 120232 Jan 20 05:11 /usr/bin/ls

NOTE: You can confirm they're identical beyond their sizes by using cmp or diff.

with diff:

    $ diff -s /usr/bin/ls /bin/ls
    Files /usr/bin/ls and /bin/ls are identical

with cmp:

    $ cmp /usr/bin/ls /bin/ls
    $

Using file

If I query them using the file command:

    $ file /usr/bin/ls /bin/ls
    /usr/bin/ls: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=0x303f40e1c9349c4ec83e1f99c511640d48e3670f, stripped
    /bin/ls:     ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=0x303f40e1c9349c4ec83e1f99c511640d48e3670f, stripped

So these are actual binary programs that have been compiled from C/C++. If they were shell scripts, they'd typically present to file like this:

    $ file somescript.bash
    somescript.bash: POSIX shell script, ASCII text executable

What's ELF?

ELF is a file format; it is the output of a compiler such as gcc, which is used to compile C/C++ programs such as ls.

    In computing, the Executable and Linkable Format (ELF, formerly called Extensible Linking Format) is a common standard file format for executables, object code, shared libraries, and core dumps.

It will typically have one of the following extensions in the filename: none, .o, .so, .elf, .prx, .puff, .bin
I have some doubts regarding *nix. I don't know which type of executable file ls is — whether it is .sh or .ksh or some other kind of system executable; if so, what is it? When I tried to look at the source code of the ls command, it showed something unreadable. What method does *nix use to create these kinds of unreadable files, and can I make my own files similar to them (unreadable, like ls)?
How are system commands like ls created?
You might know the normal read, write and execute permissions for files in Unix. However, in many applications, this type of permission structure--e.g. giving a given user either full permission to read a given file, or no permission at all to read the file--is too coarse. For this reason, Unix includes another permission bit, the set-user-ID bit. If this bit is set for an executable file, then whenever a user other than the owner executes the file, that user acquires all the read/write/execute privileges of the owner in accessing any of the owner's other files! To set the set-user-ID bit for a file, type chmod u+s filenameMake sure that you have set group-other execute permission too; it would be nice to have group-other read permission as well. All of this can be done with the single statement chmod 4755 filenameThere is also a saved UID. When a file with the set-UID bit on is executed, the saved UID is set to the UID of the file's owner. Otherwise, the saved UID equals the real UID. What is the effective UID? This UID is used to evaluate the privileges of the process to perform a particular action. If EUID != 0, it can be changed only to the real UID or the saved UID. If EUID = 0, it can be changed to anything. Example An example of such a program is passwd. If you list it in full, you will see that it has the set-UID bit and the owner is "root". When a normal user, say "mtk", runs passwd, it starts with: Real-UID = mtk Effective-UID = root Saved-UID = rootReference link 1 Reference link 2
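Linux exposes all three IDs to a process, and in Python you can inspect them with os.getresuid(). For an ordinary (non-set-UID) process all three are equal; in a set-UID program like passwd they would differ as in the example above. A small sketch:

```python
import os

# Real, effective and saved UIDs of the current process.
# For a normal (non-set-UID) process, all three are the same;
# in a set-UID-root program the effective and saved UIDs would be 0.
ruid, euid, suid = os.getresuid()
print(ruid, euid, suid)
assert ruid == euid == suid
```

Note that os.getresuid() is Linux-specific; other unices expose only os.getuid() and os.geteuid().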
Can someone please explain the set-user-ID mechanism in Unix? What was the rationale behind this design decision? How is it different from the effective user ID mechanism?
How does the set-user-ID mechanism work in Unix?
No, kernels from different implementations of Unix-style operating systems are not interchangeable, notably because they all present different interfaces to the rest of the system (user space) — their system calls (including ioctl specifics), the various virtual file systems they use... What is interchangeable to some extent, at the source level, is the combination of the kernel and the C library, or rather, the user-level APIs that the kernel and libraries expose (essentially, the view at the layer described by POSIX, without considering whether it is actually POSIX). Examples of this include Debian GNU/kFreeBSD, which builds a Debian system on top of a FreeBSD kernel, and Debian GNU/Hurd, which builds a Debian system on top of the Hurd. This isn’t quite at the level of kernel interchangeability, but there have been attempts to standardise a common application binary interface, to allow binaries to be used on various systems without needing recompilation. One example is the Intel Binary Compatibility Standard, which allows binaries conforming to it to run on any Unix system implementing it, including older versions of Linux with the iBCS 2 layer. I used this in the late 90s to run WordPerfect on Linux. See also How to build a FreeBSD chroot inside of Linux.
Can I take a Linux kernel and use it with, say, FreeBSD and vice versa (FreeBSD kernel in, say, a Debian)? Is there a universal answer? What are the limitations? What are the obstructions?
Are different Linux/Unix kernels interchangeable?
Here are the answers to your questions:I'd view it as a graphical image rather than an ASCII image. $ lstopo --output-format png -v --no-io > cpu.pngNOTE: You can view the generated file cpu.png"PU P#" = Processing Unit Processor #. These are processing elements within the cores of the CPU. On my laptop (Intel i5) I have 2 cores that each have 2 processing elements, for a total of 4. But in actuality I have only 2 physical cores. "L1i" = Level 1 instruction cache, "L1d" = Level 1 data cache. In the Intel architectures the instruction & data caches are unified as you move down from L1 → L2 → L3. "Socket P#" means there are 2 physical sockets on the motherboard, i.e. there are 2 physically discrete CPUs in this setup. In multiple-CPU architectures the RAM is usually split so that a portion of it is assigned to each CPU. If CPU0 needs data from CPU1's RAM, then it needs to "request" this data through CPU1. There are a number of reasons why this is done, too many to elaborate here. Read up on NUMA-style memory architectures if you're really curious.The drawing is showing 4 cores (with 1 Processing Unit in each) that are in 2 physical CPU packages. Each physical CPU has "isolated" access to 16 GB of RAM. No, there is no shared memory among all the CPUs. The 2 CPUs have to interact with each other's RAM through the other CPU. Again see the NUMA Wikipage for more on the Non-Uniform Memory Architecture. Yes, the system has a total of 32 GB of RAM. But only 1/2 of the RAM is accessible by either physical CPU directly.What's a socket? A socket is the term used to describe the actual package that a CPU is contained inside of, for mounting on the motherboard. There are many different styles and configurations; check out the Wikipedia page on CPU Sockets.This picture also kind of illustrates the relationships between the "cores", the CPUs, and the "sockets".
I have output from lstopo --output-format txt -v --no-io > lstopo.txt for an 8-core node in a cluster, which is https://dl.dropboxusercontent.com/u/13029929/lstopo.txtThe file is a text drawing of the node. It is too wide for both the terminal and gedit on Ubuntu on my laptop, and some of its right part is wrapped by my laptop to the left and overlaps the left part of the drawing. I wonder how I can view the file properly? ( Added: I realize that I can view the drawing properly by uploading it to Dropbox and opening it in Firefox, which zooms out the drawing properly. But opening the local file in Firefox mis-displays the dash lines "-", and I wonder why? Other than Firefox, can any other software work on it?) What does "PU P#" mean in each core "Core P#"? Why are their numbers not the same? Does "L1i" mean an L1 instruction cache, and "L1d" an L1 data cache? Why do L2 and L3 caches not have a distinction between instruction cache and data cache? Is this common for computers? What does "Socket P#" mean? Is the "socket" used for the connection between the L3 caches and the main memory? What does "NUMANode P# (16GB)" mean? Is it a main memory chip? Does the drawing show that there are four cores sharing a main memory chip, and the other four cores sharing another main memory chip? Is there not a main memory shared by all the 8 cores in the node? So is the node just like a distributed system with two 4-core computers without shared memory between them? How can the two 4-core groups communicate with each other? Does "Machine (32GB)" mean the sum of the sizes of the two main memory chips mentioned in 6?
Interpret the output of lstopo
Conceptually, a library function is part of your process. At run-time, your executable code and the code of any libraries (such as libc.so) it depends on, get linked into a single process. So, when you call a function in such a library, it executes as part of your process, with the same resources and privileges. It's the same idea as calling a function you wrote yourself (with possible exceptions like PLT and/or trampoline functions, which you can look up if you care). Conceptually, a system call is a special interface used to make a call from your code (which is generally unprivileged) to the kernel (which has the right to escalate privileges as necessary).For example, see the Linux man brk. When a C program calls malloc to allocate memory, it is calling a library function in glibc. If there is already enough space for the allocation inside the process, it can do any necessary heap management and return the memory to the caller. If not, glibc needs to request more memory from the kernel: it (probably) calls the brk glibc function, which in turn calls the brk syscall. Only once control has passed to the kernel, via the syscall, can the global virtual memory state be modified to reserve more memory, and map it into your process' address space.
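The wrapper/system-call relationship can be seen directly from Python: calling getpid through the C library is an ordinary in-process function call, while syscall(2) traps straight into the kernel by number. A sketch — note that the syscall number 39 for getpid is an assumption that holds on x86-64 Linux only:

```python
import ctypes
import os

# The C library is already linked into this process; calling one of
# its functions is just a function call within our own address space.
libc = ctypes.CDLL(None, use_errno=True)

pid_via_os = os.getpid()      # Python's wrapper
pid_via_libc = libc.getpid()  # the libc wrapper function
assert pid_via_os == pid_via_libc

# syscall(2) bypasses the wrapper and invokes the kernel entry point
# by number. 39 is getpid on x86-64 Linux (architecture-specific!).
SYS_getpid = 39
pid_via_kernel = libc.syscall(SYS_getpid)
assert pid_via_os == pid_via_kernel
```

All three return the same PID; the difference is purely in how much of the journey happens inside the process (library code) versus inside the kernel (the syscall itself).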
I have been through the answer of this question but do not quite understand the difference between system calls and library functions. Conceptually, what is the difference between the two?
Difference between system calls and library functions
All modern operating systems support multitasking. This means that the system is able to execute multiple processes at the same time; either in pseudo-parallel (when only one CPU is available) or nowadays with multi-core CPUs being common in parallel (one task/core). Let's take the simpler case of only one CPU being available. This means that if you execute at the same time two different processes (let's say a web browser and a music player) the system is not really able to execute them at the same time. What happens is that the CPU is switching from one process to the other all the time; but this is happening extremely fast, thus you never notice it. Now let's assume that while those two processes are executing, you press the reset button (bad boy). The CPU will immediately stop whatever is doing and reboot the system. Congratulations: you generated an interrupt. The case is similar when you are programming and want to ask for a service from the CPU. The difference is that in this case you execute software code -- usually library procedures that are executing system calls (for example fopen for opening a file). Thus, 1 describes two different ways of getting attention from the CPU. Most modern operating systems support two execution modes: user mode and kernel mode. By default an operating system runs in user mode. User mode is very limited. For example, all I/O is forbidden; thus, you are not allowed to open a file from your hard disk. Of course this never happens in real, because when you open a file the operating system switches from user to kernel mode transparently. In kernel mode you have total control of the hardware. If you are wondering why those two modes exist, the simplest answer is for protection. Microkernel-based operating systems (for example MINIX 3) have most of their services running in user mode, which makes them less harmful. Monolithic kernels (like Linux) have almost all their services running in kernel mode. 
Thus a driver that crashes in MINIX 3 is unlikely to bring down the whole system, while this is not unusual in Linux. System calls are the primitive used in monolithic kernels (shared data model) for switching from user to kernel mode. Message passing is the primitive used in microkernels (client/server model). To be more precise, in a message-passing system programmers also use system calls to get attention from the CPU. Message passing is visible only to the operating system developers. Monolithic kernels using system calls are faster but less reliable, while microkernels using message passing are slower but have better fault isolation. Thus, 2 mentions two different ways of switching from user to kernel mode. To recap, the most common way of creating a software interrupt, aka trap, is by executing a system call. Interrupts on the other hand are generated purely by hardware. When we interrupt the CPU (either by software or by hardware) it needs to save its current state somewhere -- the process that it executes and at which point it did stop -- otherwise it will not be able to resume the process when switching back. That is called a context switch and it makes sense: Before you switch off your computer to do something else, you first need to make sure that you saved all your programs/documents, etc so that you can resume from the point where you stopped the next time you'll turn it on :) Thus, 3 explains what needs to be done after executing a trap or an interrupt and how similar the two cases are.
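Signals give user space a small taste of the same pattern: the kill system call asks the kernel to interrupt a process, and the kernel then transfers control to a registered handler, much as a hardware interrupt transfers control to an interrupt handler. A minimal Python sketch that delivers a signal to the current process:

```python
import os
import signal

received = []

# Register a handler, then ask the kernel (via the kill system call)
# to deliver SIGUSR1 to this very process. The kernel interrupts the
# normal flow of the program to run the handler.
signal.signal(signal.SIGUSR1, lambda signum, frame: received.append(signum))
os.kill(os.getpid(), signal.SIGUSR1)

assert received == [signal.SIGUSR1]
```

This is only an analogy, of course: real hardware interrupts and the kernel's own trap handling happen below this level, but the save-state/run-handler/resume shape is the same.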
I am reading the Wikipedia article for process management. My focus is on Linux. I cannot figure out the relation and differences between system call, message passing and interrupt, in their concepts and purposes. Are they all for processes to make requests to kernel for resources and services? Some quotes from the article and some other:There are two possible ways for an OS to regain control of the processor during a program’s execution in order for the OS to perform de-allocation or allocation:The process issues a system call (sometimes called a software interrupt); for example, an I/O request occurs requesting to access a file on hard disk. A hardware interrupt occurs; for example, a key was pressed on the keyboard, or a timer runs out (used in pre-emptive multitasking).There are two techniques by which a program executing in user mode can request the kernel's services: * System call * Message passingan interrupt is an asynchronous signal indicating the need for attention or a synchronous event in software indicating the need for a change in execution. A hardware interrupt causes the processor to save its state of execution and begin execution of an interrupt handler. Software interrupts are usually implemented as instructions in the instruction set, which cause a context switch to an interrupt handler similar to a hardware interrupt.
What is the relationship between system calls, message passing, and interrupts?
I checked the uname manual (man uname) and it says the following for the "-a" option: print all information, in the following order, except omit -p and -i if unknownIn Ubuntu, I guess, options "-m", "-p" and "-i" (machine, processor and hardware-platform) are all returning the machine architecture. For example, if you use the command uname -mpi You will see: x86_64 x86_64 x86_64On the other hand, if you choose all the options: uname -snrvmpio You will get the same result as: uname -aOutput: Linux <hostname> 3.13.0-29-generic #53-Ubuntu SMP Wed Jun 4 21:00:20 UTC 2014 x86_64 x86_64 x86_64 GNU/LinuxI also executed "uname" with options "-m", "-p" and "-i" on an ARCHLINUX distro and I got a different answer: x86_64 unknown unknownIn fact, when I asked for "uname -a" on the ARCHLINUX distro the answer was: Linux <hostname> xxxxxx-ARCH #1 SMP PREEMPT Mon Feb 14 20:40:47 CEST 2015 x86_64 GNU/LinuxWhile when I executed "uname -snrvmpio" on the ARCHLINUX distro I got: Linux <hostname> xxxxxx-ARCH #1 SMP PREEMPT Mon Feb 14 20:40:47 CEST 2015 x86_64 unknown unknown GNU/Linux
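The same fields are available programmatically: Python's os.uname() returns the kernel name, hostname, release, version and machine architecture — essentially uname -snrvm, without the -p/-i fields that uname may report as unknown. A sketch:

```python
import os
import platform

# os.uname() exposes the uname(2) fields: sysname, nodename,
# release, version, machine -- the same data "uname -snrvm" prints.
u = os.uname()
print(u.sysname, u.release, u.machine)

# platform.machine() reports the same architecture field.
assert u.machine == platform.machine()
```

The processor and hardware-platform fields are not part of uname(2) at all, which is why distros that don't fill them in print "unknown".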
$ uname -a Linux 3.13.0-29-generic #53-Ubuntu SMP Wed Jun 4 21:00:20 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux Running ubuntu 12.04.1 LTS. Why does it have the architecture (x86_64) listed thrice?
Why is architecture listed thrice in uname -a?
You will certainly notice differences. Most noticeable will be differences in the standard userland utilities. FreeBSD does not use GNU ls, GNU cp, and so on. For example, if you're attached to a colorized ls, you may want to alias ls to "ls -G". It does use GNU grep, though. The default shell is a much simpler and less bloated shell than GNU Bash, which is the default on most Linux distributions. If you are attached to bash, that may be one of the first packages you will want to install. The ports system has been the standard way to install software on the various BSDs. Ports downloads the source code, builds it, and then installs it. It's nearly entirely automatic. To install bash, for example, do this as root: cd /usr/ports/shells/bash && make install && make cleanIf you don't do a make clean at the end, you will leave the built source code lying in the ports tree. Many ports have pre-built packages that can be downloaded if you prefer not to spend time building them and don't need to customize them. To install bash as a package, this should do it: pkg_add -r bashYou can find almost any common program in ports, including Gnome 3, sudo, rsync, or whatever else you need. A great website for navigating ports is FreshPorts. You should also get familiar with the FreeBSD Handbook.
I want to install FreeBSD today on a spare HDD I have lying around. I'd like to give it a trial run, learn a few things, and if it suits me I'll replace my current Ubuntu 10.10 'server/NAS/encoding box' with it. Curiosity is the main reason. I also want to see most of the major bugs ironed out of GNOME 3/Unity before I jump aboard the next Ubuntu iteration. I have no experience with the BSDs (except for OS X) but I have installed and used quite a few Linux distros over the years. I have a fairly good understanding of how to get Linux up and running, including some of the roll-your-own distros such as Arch. But I'm not an expert by any stretch of the imagination. Basically, I'd say I'm better than my grandma is. So is there anything that I should keep in mind when installing FreeBSD for the first time? In particular, are there any major differences between installing and setting up FreeBSD and a Linux distro? Furthermore, should I be using an i386 release? I read somewhere in the documentation that i386 is recommended but I'm not sure if that's out-of-date information.
First FreeBSD install. Is there anything I should know about differences between Linux and BSD?
Basically, there aren't any architectural differences between the two distributions, except for the way they handle the init system: Guix System uses GNU Shepherd while NixOS uses systemd. To the best of my understanding, Guix/Guix System is a re-implementation of the framework seen in Nix/NixOS, utilizing GNU tooling. In other words, it is like NixOS but with a different user experience:The entirety of its codebase is developed using Guile and Lisp, in contrast to Nix and Bash. It employs GNU Shepherd in lieu of systemd. Guix does not package non-free software while nixpkgs does. Guix provides support for the GNU Hurd kernel.I tried Guix out about a year ago and found some limitations back then:It was impossible to install the root filesystem on LVM. Building a package required recompiling all Guix modules.It is noteworthy that nixpkgs is one of the largest package repositories, whereas Guix repositories are constrained by the limited number of maintainers and the "libre software only" limitation. The Nix project is also more mature, enjoying a ten-year head start and a much larger community. Furthermore, since Nix is a package manager, it can be installed on any distribution, including Guix System. This means that you can install packages from nixpkgs using Nix on a Guix System. As pointed out in the comments by MegaTux, Guix is also a standalone package manager (that is shipped with the Guix System distribution) and can be installed on any distribution.
(This is not a "which distribution is better" question!) GNU GUIX and NixOS are two Linux distributions based on the Nix package manager. I realize that GUIX seems to use Guile for defining packages/dependencies or other meta-data uses; and I'm guessing everything in GUIX is GPL'ed, while perhaps not everything in NixOS is... but those seem more like superficial differences. What I'm hoping to understand is whether these two distributions have architectural differences of any significance.
Do GUIX and NixOS differ architecturally?
There are several parts to what login programs do. Login programs differ in how they interact with the user who's trying to log in. Here are a few examples:login: reads input on a text terminal su: invoked by an already logged-in user, gets most of the data from its command-line arguments, plus authentication data (password) from a terminal gksu: similar to su, but reads authentication data in X rlogind: obtains input over a TCP connection through the rlogin protocol sshd: obtains input over a TCP connection through the SSH protocol X display managers (xdm, gdm, kdm, …): similar to login, but read input on an X displayThese programs operate in similar ways. The first part is authentication: the program reads some input from the user and decides whether the user is authorized to log in. The traditional method is to read a user name and password, and check that the user is mentioned in the system's user database and that the password that the user typed is the one in the database. But there are many other possibilities (one-time passwords, biometric authentication, authorization transfer, …). Once it has been established that the user is authorized to log in and in what account, the login program establishes the user's authorization, for example what groups the user will belong to in this session. The login program may also check account restrictions. For example, it may enforce a login time, or a maximum number of logged-in users, or refuse certain users on certain connections. Finally the login program sets up the user's session. There are several substeps:Set the process permissions to what was decided in the authorization: user, groups, limits, … You can see a simple example of this substep here (it only handles user and groups).
The basic idea is that the login program is still running as root at this point, so it has maximum privileges; it first removes all privileges other than being the root user, and finally calls setuid to drop that last, and most important, privilege. Possibly mount the user's home directory, display a “you have mail” message, etc. Invoke some program as the user, typically the user's shell (for login and su; sshd does the same if no command was specified; an X display manager invokes an X session manager or window manager).Most unices nowadays use PAM (Pluggable Authentication Modules) to provide a uniform way of managing login services. PAM divides its functionality into 4 parts: “auth” encompasses both authentication (1 above) and authorization (2 above); “account” and “session” are as 3 and 4 above; and there's also “password”, which is not used for logins but to update authentication tokens (e.g. passwords).
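The privilege-dropping step can be sketched as follows. This is an illustrative sketch, not login's actual code: the order matters, because once setuid succeeds the process loses the right to make the other two calls.

```python
import os

def drop_privileges(uid, gid):
    """Illustrative sketch of how a login-style program, running as
    root, would switch to an unprivileged user. Order matters:
    setuid must come last, since it is the call that gives up the
    right to make the other two."""
    os.setgroups([])  # drop supplementary groups first
    os.setgid(gid)    # then the group ID
    os.setuid(uid)    # then the user ID -- the point of no return

# Only a process running as root may do this; an unprivileged
# process attempting it gets PermissionError (EPERM).
```

After this sequence the process has exactly the identity of the target user, which is why login can then simply exec the user's shell.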
I am trying to understand how user permissions work in Linux. The kernel boots and starts init as root, right? Init then runs startup scripts and runs getty (agetty), again as root. Agetty just reads the user name and runs login, still as root, I think. Nothing interesting yet. But what does login do? I wasn't able to find anything better than "it attempts to log in". Suppose login finds that the password matches (and we are trying to log in as a normal user); how does it change the user ID? I thought that there should be a system call for that but I wasn't able to find it (maybe I'm just blind?)Also, about su. su has the 'setuid' bit set so when we run it, it always runs as root. But when we tell it to log in as a normal user, it again needs to change the user ID. Do I understand correctly that the same "magic" happens in su and login when they need to change user? If so, why have two different programs? Are there any additional sorts of serious business happening when running login?
login and su internals
From the Fedora documentation for rpm, spec files, and rpmbuild: The --target option sets the target architecture at build time. Chapter 3, Using RPM covers how you can use the --ignoreos and --ignorearch options when installing RPMs to ignore the operating system and architecture that is flagged within the RPM. Of course, this works only if you are installing on a compatible architecture.On the surface level, the --target option overrides some of the macros in the spec file, %_target, %_target_arch, and %_target_os. This flags the RPM for the new target platform.Under the covers, setting the architecture macros is not enough. You really cannot create a PowerPC executable, for example, on an Intel-architecture machine, unless you have a PowerPC cross compiler, a compiler that can make PowerPC executables.http://docs.fedoraproject.org/en-US/Fedora_Draft_Documentation/0.1/html/RPM_Guide/ch-rpmbuild.html So, as it says, make sure you have the additional compilers installed (for example gcc.i686 & gcc.x86_64).
I am building an rpm using rpmbuild command as: rpmbuild -bb --root <DIRECTORY> --target i386 --define "_topdir <DIRECTORY>" <specfile>.spec When I use my SLED 10 SP3 x86 machine, it runs successfully. But on my SLES 10 SP3 x64 Virtual Machine, it gives following error: error: No compatible architectures found for buildInitially I was not using --target option, still it was running on x86 machine, but same error was there in x64 machine. Please help me to resolve this error
How can I build a rpm for i386 target on a x86-64 machine?
(I'll try to be brief.) In theory, there are two dimensions of privileges:The computer's instruction set architecture (ISA), which protects certain information and/or functions of the machine. The operating system (OS), creating an eco-system for applications and communication. At its core is the kernel, a program that can run on the ISA with no dependencies of any kind.Today's operating systems perform a lot of very different tasks so that we can use computers as we do today. In a very(, very, very) simplified view you can imagine the kernel as the only program that is executed by the computer. Applications, processes and users are all artefacts of the eco-system created by the OS and especially the kernel. When we talk about user(space) privileges with respect to the operating system, we talk about privileges managed, granted and enforced by the operating system. For instance, file permissions restricting fetching data from a specific directory are enforced by the kernel. It looks at some IDs associated with the file, interprets some bits which represent privileges and then either fetches the data or refuses to do so. The privilege hierarchy within the ISA provides the tools the kernel uses for its purposes. The specific details vary a lot, but in general there is the kernel mode, in which programs executed by the CPU are very free to perform I/O and use the instructions offered by the ISA, and the user mode where I/O and instructions are constrained. For instance, when reading the instruction to write data to a specific memory address, a CPU in kernel mode could simply write data to that memory address, while in user mode it first performs a few checks to see if the memory address is in a range of allowed addresses to which data may be written.
If it is determined that the address may not be written to, usually, the ISA will switch into kernel mode and start executing another instruction stream, which is a part of the kernel, and it will do the right thing(TM). That is one example of an enforcement strategy to ensure that one program does not interfere with another program ... so that the JavaScript on the webpage you are currently visiting cannot make your online banking application perform dubious transactions ... Notice, in kernel mode nothing else is triggered to enforce the right thing; it is assumed the program running in kernel mode is doing the right thing. That's why in kernel mode nothing can force a program to adhere to the abstract rules and concepts of the OS's eco-system. That's why programs running in kernel mode are comparably powerful to the root user. Technically, kernel mode is much more powerful than just being the root user on your OS.
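That memory-address check is observable from user space: touching an address outside any mapping the kernel set up makes the CPU trap into the kernel, and on a failed check the kernel answers by delivering SIGSEGV. A sketch that provokes this in a throwaway child process (ctypes.string_at(0) attempts to read from address 0):

```python
import subprocess
import sys

# Reading address 0 is outside any mapping the kernel set up for the
# process, so the access traps into the kernel, which kills the
# process. Do it in a child process so we survive to observe it.
crasher = "import ctypes; ctypes.string_at(0)"
result = subprocess.run([sys.executable, "-c", crasher],
                        stderr=subprocess.DEVNULL)

print(result.returncode)  # a negative value here means killed by that signal
assert result.returncode != 0
```

On Linux a returncode of -11 indicates death by signal 11 (SIGSEGV); the exact value may vary, but the child never exits successfully.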
According to http://www.linfo.org/kernel_mode.html in paragraph 7:When a user process runs a portion of the kernel code via a system call, the process temporarily becomes a kernel process and is in kernel mode. While in kernel mode, the process will have root (i.e., administrative) privileges and access to key system resources. The entire kernel, which is not a process but a controller of processes, executes only in kernel mode. When the kernel has satisfied the request by a process, it returns the process to user mode.It is quite unclear to me about the line,While in kernel mode, the process will have root (i.e., administrative) privileges and access to key system resources.How come a userspace process running not as root will have root privileges? How does it differ from a userspace process running as root?
Process in user mode switch to kernel mode. Then the process will have root privileges?
Because of how waitpid works. On a POSIX system, a signal (SIGCHLD) is delivered to a parent process when one of its child processes dies. At a high level, all waitpid is doing is blocking until a SIGCHLD signal is delivered for the process (or one of the processes) specified. You can't wait on arbitrary processes, because the SIGCHLD signal would never be delivered for them.
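Both halves of this are easy to demonstrate: a parent can wait on its own child, but waiting on an unrelated process (PID 1 here) fails with ECHILD, which Python surfaces as ChildProcessError. A sketch:

```python
import os

# Wait on our own child: works.
pid = os.fork()
if pid == 0:
    os._exit(7)                # child exits with status 7
waited, status = os.waitpid(pid, 0)
assert waited == pid
assert os.WEXITSTATUS(status) == 7

# Wait on a process that is not our child: ECHILD.
try:
    os.waitpid(1, 0)           # PID 1 is certainly not our child
    raise AssertionError("expected ChildProcessError")
except ChildProcessError:
    pass
```

The second call fails immediately rather than blocking: the kernel only keeps exit status around for the parent to collect, so there is nothing for an unrelated process to wait for.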
The man page wait(2) states that the waitpid system call returns the ECHILD error if the specified process is not a child of the calling process. Why is this? Would waiting on a non-child process create some sort of security issue? Is there a technical reason why implementing waiting on a non-child process would be difficult or impossible?
Why can the waitpid system call only be used with child processes?
Both the kernel and the filesystem play a role. Permissions are stored in the filesystem, so there needs to be a place to store the information in the filesystem format. Permissions are enforced and communicated to applications by the kernel, so the kernel must implement rules to determine what the information stored in the filesystem means. “Unix file permissions” refer to a traditional permission system which involves three actions (read, write, execute) controlled via three role types (user, group, other). The job of the filesystem is to store 3×3=9 bits of information. The job of the kernel is to interpret these bits as permissions; in particular, when a process attempts an operation on a file, the kernel must determine, given the user and groups that the process is running as, the permission bits of the file, and the requested operation, whether to allow the operation. (“Unix file permissions” also usually includes the setuid and setgid bits, which aren't strictly speaking permissions.) Modern unix systems may support other forms of permissions. Most modern unix systems (Solaris, Linux, *BSD) support access control lists which allow assigning read/write/execute permissions for more than one user and more than one group for each file. The filesystem must have room to store this extra information, and the kernel must include code to look up and use this information. Ext2, reiserfs, btrfs, zfs, and most other modern unix filesystem formats define a place to store such ACLs. Mac OS X supports a different set of ACLs which include non-traditional permissions such as “append” and “create subdirectory”; the HFS+ filesystem format supports them. If you mount an HFS+ volume on Linux, these ACLs won't be enforced since the Linux kernel doesn't support them. Conversely, there are operating systems and filesystems that don't support access control.
For example, FAT and its variants were designed for single-user operating systems and removable media, and their permissions are limited to read/read-write and hidden/visible. These are the permissions enforced by DOS. If you mount an ext2 filesystem on DOS, it won't enforce the ext2 permissions. Conversely, if you access a FAT filesystem on Linux, all files will have the same permissions. Successive versions of Windows have added support for more permission types. The NTFS filesystem was extended to store those extra permissions. If you access a filesystem with the newer permissions on an older operating system, the OS won't know about these newer permissions and so won't enforce them. Conversely, if you access an older filesystem with a newer operating system, it won't contain the new permissions and it is up to the OS to provide sensible fallbacks.
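The division of labour is visible from user space: chmod asks the kernel to write the nine traditional bits into the filesystem, stat reads them back, and the stat module decodes them the way ls -l would. A sketch using a temporary file:

```python
import os
import stat
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)

# Store the 3x3 permission bits in the filesystem: rw- r-- ---
os.chmod(path, 0o640)

# Read them back out of the inode and decode them.
mode = os.stat(path).st_mode
print(stat.filemode(mode))  # '-rw-r-----'
assert stat.S_IMODE(mode) == 0o640

os.unlink(path)
```

On a filesystem that cannot store these bits (such as FAT), the kernel has to synthesize the values it reports here, which is exactly the "sensible fallbacks" situation described above.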
A question that occurred to me earlier: are file permissions/attributes OS- (and therefore kernel-) dependent or are they filesystem-dependent? It seems to me that the second alternative is the more logical one, yet I never heard of reiserfs file permissions, for example: only "Unix file permissions". On the other hand, to quote from a Wikipedia article:As new versions of Windows came out, Microsoft has added to the inventory of available attributes on the NTFS file systemwhich seems to suggest that Windows file attributes are somehow tied to the filesystem. Can someone please enlighten me?
How do file permissions/attributes work? Kernel-level, FS-level or both?
You can observe what the process does with the strace command. Strace shows the system calls performed by a process. Everything¹ a process does that affects its environment is done through system calls. For example, creating a directory can only be done by ultimately calling the mkdir system call. The mkdir shell command is a thin wrapper around the system call of the same name. To see what mkdir is doing, run

strace mkdir foo

You'll see a lot of calls other than mkdir (76 in total for a successful mkdir on my system), starting with execve which loads the process binary image, then calls to load the libraries and data files used by the program, calls to allocate memory, calls to observe the system state, … Finally the command calls mkdir and winds down, finishing with exit_group.

To observe what a GUI program is doing, start it and only observe it during one action. Find out the process ID of the program (with ps x, htop or any other process viewer), then run

strace -o file_manager.mkdir.strace -p1234

This puts the trace from process 1234 in the file file_manager.mkdir.strace. Press Ctrl+C to stop strace without stopping the program. Note that something like entering the name of the directory may involve thousands or tens of thousands of system calls: handling mouse movements, focus changes and so on is a lot more complex at that level than creating a directory.

You can select which system calls are recorded in the strace output by passing the -e option. For example, to omit read, write and select:

strace -e \!read,write,select …

To only record mkdir calls:

strace -e mkdir …

¹ Ok, almost everything. Shared memory only involves a system call for the initial setup.
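A quick way to see the scale involved is strace's -c flag, which prints a per-syscall count table instead of a full trace. A sketch (guarded, in case strace isn't installed on your system):

```shell
d=$(mktemp -d)
if command -v strace >/dev/null 2>&1; then
    # -c summarises: one row per syscall, with call counts and times.
    strace -c mkdir "$d/foo" 2>&1 | tail -n 5
else
    mkdir "$d/foo"
    echo "strace not installed; created $d/foo without tracing"
fi
```

The summary table makes it obvious that the mkdir syscall itself is a tiny fraction of what the mkdir command does.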
This is for academic purposes. I want to know which commands are executed when we do something in the GUI, for example creating a folder. I want to show that the mkdir shell command and the create-folder option in the GUI do the same thing.
How to know which commands are executed when I do something in GUI
QEMU user emulation is exactly why your binary runs: on your system, one of the QEMU-related packages you’ve installed ensures that QEMU is registered as a handler for all the architectures it can emulate, and the kernel then passes binaries to it. As long as you have the required libraries, if any, the binary will run; since your binary is statically linked, it has no external dependencies. See Why can my statically compiled ARM binary of BusyBox run on my x86_64 PC? and How is Mono magical? for details.
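You can inspect the kernel side of this yourself: binfmt_misc registrations live under /proc/sys/fs/binfmt_misc (populated only if binfmt_misc is mounted and handlers such as qemu-aarch64 are registered; this sketch prints a fallback message otherwise):

```shell
# Each registered handler is a file describing the magic bytes it matches
# and the interpreter (e.g. a qemu-user binary) the kernel will run.
if ls /proc/sys/fs/binfmt_misc 2>/dev/null | grep -q .; then
    ls /proc/sys/fs/binfmt_misc
else
    echo "no binfmt_misc registrations visible"
fi
```

On a system with QEMU user emulation set up, you would see entries like qemu-aarch64, which is how the kernel knows to hand your ARM binary to QEMU.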
I compiled a simple "Hello World" C program on Raspberry Pi 3, which was then transferred to an AMD64 laptop. Out of curiosity, I executed it, and it runs even though I did not expect it to:

$ uname -a
Linux 15ud490-gx76k 6.5.0-25-generic #25~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Tue Feb 20 16:09:15 UTC 2 x86_64 x86_64 x86_64 GNU/Linux
$ file hello64
hello64: ELF 64-bit LSB executable, ARM aarch64, version 1 (GNU/Linux), statically linked, BuildID[sha1]=486ee1cde035cd704b49c037a32fb77239b6a1c2, for GNU/Linux 3.7.0, not stripped
$ ./hello64
Hello World!

Like that, how can it execute? QEMU User Emulation is installed, but I don't know whether it is playing a part in this or not.
Why can an aarch64 ELF executable be run on an x86_64 machine?
Unix runlevels are orthogonal (in the sense "unrelated", "independent of" - see comments) to protection rings. Runlevels are basically run-time configurations/states of the operating system as a whole; they describe what services are available ("to the user") - like SSH access, MTA, file server, GUI. Rings are a hardware-aided concept which allows finer-grained control over the hardware (as mentioned in the wikipedia page you link to). For example, code running in a higher ring may not be able to execute some CPU instructions. Linux on the x86 architecture usually uses Ring0 for the kernel (including device drivers) and Ring3 for userspace applications (regardless of whether they are run by root or another ordinary or privileged user). Hence you can't really say that a runlevel is running in some specific ring - there are always1 userspace applications (at least PID 1 - the init) running in Ring3, and the kernel in Ring0.

1As always, the "always" really means "almost always", since you can run "normal" programs in Ring0, but you are unlikely to see that in real life (unless you work on HPC).
The question stated below might not be technically correct (a misconception), so it would be appreciated if the misconception is also addressed. Which ring level do the different *nix run levels operate in? (A ring tag is not available.)
Rings and run levels
The login binary is pretty straightforward (in principle). It's just a program that runs as root user (started, indirectly through getty or an X display manager, from init, the first user-space process). It performs authentication of the logging-in user, and if that is successful, changes user (using one of the setuid() family of system calls), sets appropriate environment variables, umask, etc, and exec()s a login shell. It may be instructive to read the source code, but if you do so, you'll find it easiest (assuming the standard shadow-utils login that Debian installs) to read it assuming USE_PAM is not set, at least until you are comfortable with its operation, or you'll find too much distraction.
I am wondering how the login actually works. It certainly is not part of the kernel, because I can set the login to use ldap for example, or keep using /etc/passwd; but the kernel certainly is able to use information from it to perform authentication and authorization activities. There is also a systemd daemon, called logind which seems to start up the whole login mechanism. Is there any design document I can look at, or can someone describe it here?
How does the Linux login work? [duplicate]
A bit more info in info uname:

`-i'
`--hardware-platform'
     Print the hardware platform name (sometimes called the hardware
     implementation). Print `unknown' if the kernel does not make this
     information easily available, as is the case with Linux kernels.

`-m'
`--machine'
     Print the machine hardware name (sometimes called the hardware class
     or hardware type).

Basically these are classification types - you can have different hw implementations (-i) within the same hw class (-m). Used, for example, to differentiate between kernel modules shared by the same hw class and modules specific to a certain hw implementation.
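Seen side by side (on many Linux systems -i simply prints unknown, and some uname implementations don't support the option at all, hence the guard):

```shell
uname -m                               # machine hardware name (class)
uname -i 2>/dev/null || echo unknown   # hardware platform (implementation)
```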
man uname:

-m, --machine
       print the machine hardware name
-i, --hardware-platform
       print the hardware platform or "unknown"

What exactly is meant by hardware platform here, and how is it different from the "machine hardware name"? I found some related questions on SE but there seem to be some contradictions among the accepted answers. Where can I find accurate information about this nomenclature?
Meaning of hardware platform in uname command output
Consider: two processes can have the same file open for reading & writing at the same time, so some kind of communication is possible between the two.

When process A writes to the file, it first populates a buffer inside its own process-specific memory with some data, then calls write, which copies that buffer into another buffer owned by the kernel (in practice, this will be a page cache entry, which the kernel will mark as dirty and eventually write back to disk). Now process B reads from the same point in the same file; read copies the data from the same place in the page cache into a buffer in B's memory.

Note that two copies are required: first the data is copied from A into the "shared" memory, and then copied again from the "shared" memory into B.

A could use mmap to make the page cache memory available directly in its own address space. Now it can format its data directly into the same "shared" memory, instead of populating an intermediate buffer, avoiding a copy. Similarly, B could mmap the page directly into its address space. Now it can directly access whatever A put in the "shared" memory, again without having to copy it into a separate buffer. (Obviously some kind of synchronization is required if you really want to use this scheme for IPC, but that's out of scope.)

Now consider the case where A is replaced by the driver for whatever device this file is stored on. By accessing the file with mmap, B still avoids a redundant copy (the DMA or whatever into the page cache is unavoidable, but the data doesn't need to be copied again into B's buffer).

There are also some drawbacks, of course. For example:

if your device and OS support asynchronous file I/O, you can avoid blocking reads/writes using that ... but reading or writing a mmapped page can cause a blocking page fault which you can't handle directly (although you can try to avoid it using mincore etc.)
it won't stop you trying to read off the end of a file, or help you append to it, in a nice way (you need to check the length or explicitly truncate the file larger)
Can someone explain, in an easy-to-understand way, the concept of memory mappings (achieved by the mmap() system call) in Unix-like systems? When do we require this functionality?
Concept of memory mapping in Unix like systems
Put simply, a terminal is an I/O environment for programs to operate in, and a shell is a command processor that allows for the input of commands to cause actions (usually both interactively and non-interactively (scripted)). The shell is run within the terminal as a program. There is little difference between a local and remote shell, other than that they are local and remote (and a remote shell generally is connected to a pty, although local shells can be too).
I'm finding myself helping out some classmates in my computer science class, because I have prior development experience, and I'm having a hard time explaining certain things like the shell. What's a good metaphor for the shell in the context of the Terminal on Mac, contrasted with a remote shell via SSH?
Metaphor for the concept of shell?
Sort of. Check out User-mode Linux.
I know that Linux OS's are typically multi-programmed, which means that multiple processes can be active at the same time. Can there be multiple kernels executing at the same time?
Can there be multiple kernels executing at the same time?
GNU (GNU's Not Unix) is an operating system, created by Richard M. Stallman. You can use this operating system with different kernels: the Linux kernel, the Hurd kernel, the Darwin kernel, etc. The X Window System (common on Unix-like systems) is just the basic layer for a GUI environment. Every Linux distribution is a GNU operating system with a Linux kernel and an X Window System; on top of X, you have the window manager (GUI) such as Xfce, Gnome, or KDE that lets you easily use your system.
Can someone provide me with a very clear and practical example of a "windowing system"? I was reading on Linux, and although I've always known that it's a kernel, I didn't really know what a kernel is because I haven't taken an OS class yet. My understanding of it is that it's basically the layer between hardware and software. Would that be correct? Now, the Linux distros everyone uses are a combination of GNU/Linux/X Window System. I think I got the Linux kernel part, but what is a windowing system, and what is GNU? Wikipedia says GNU is an OS, but then that would mean Linux distros are composed of another OS. Can someone clear this up for me?
What is a windowing system?
Nothing. Different Linux distributions, and the LSB, had different standards, so both are present on CentOS to make it easier to run software from different versions. One is just a symbolic link to the other. http://www.centos.org/docs/5/html/5.1/Installation_Guide/s2-boot-init-shutdown-init.html gives details on the boot process, but ultimately all the init scripts are almost-but-not-completely identical on the different Linux systems.
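The relationship is plain symlinking, which you can reproduce anywhere; a sketch in a scratch directory (the names are illustrative, not the real CentOS paths):

```shell
d=$(mktemp -d)
mkdir "$d/rc.d"
touch "$d/rc.d/S99local"    # stand-in for an init script
ln -s rc.d "$d/alias"       # "alias" plays the role of the /etc/rc*.d name
ls "$d/alias"               # shows S99local: two names, one directory
readlink "$d/alias"         # prints: rc.d
```

Whichever path you use, you are looking at the same directory and the same scripts, which is exactly the situation with /etc/rc*.d and /etc/rc.d/rc*.d on CentOS.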
I know that rc*.d directories are used at startup, reboot, and so on, for starting or stopping programs. Can anybody explain to me the difference between the rc*.d folders placed under the /etc/ path and those placed under the /etc/rc.d/ path? Also, what's the difference between /etc/init.d and /etc/rc.d/init.d? Thanks. N.B. I'm running CentOS 6.2.
What's the difference between /etc/rc.d/rc*.d and /etc/rc*.d
You need to remove them simultaneously, and force their removal in spite of their “protected” status: dpkg --purge --force-remove-protected {gcc-12-base,libc6,libcrypt1,libgcc-s1}:i386
I'm using 64-bit Kali Linux; I previously installed the i386 architecture and now I want to remove it, because it downloads about 30 MB of data for 32-bit packages on every apt update. I tried dpkg --remove-architecture i386; it failed with:

dpkg: error: cannot remove architecture 'i386' currently in use by the database

Google says the i386 packages should be removed first, but some packages like gcc-12-base:i386, libc6:i386, libcrypt1:i386, libgcc-s1:i386 cannot be removed. How do I solve this?
Cannot remove architecture i386
The hardware, the kernel and the user space programs may have different word sizes¹.

You can see whether the CPU is 64-bit, 32-bit, or capable of both by checking the flags line in /proc/cpuinfo. You have to know the possible flags on your architecture family. For example, on i386/amd64 platforms, the lm flag identifies amd64-capable CPUs (CPUs that don't have that flag are i386-only).

grep -q '^flags *:.*\blm\b' /proc/cpuinfo  # Assuming a PC

You can see whether the kernel is 32-bit or 64-bit by querying the architecture with uname -m. For example, i[3456]86 are 32-bit while x86_64 is 64-bit. Note that on several architectures, a 64-bit kernel can run 32-bit userland programs, so even if uname -m shows a 64-bit kernel, there is no guarantee that 64-bit libraries will be available.

[ "$(uname -m)" = "x86_64" ]  # Assuming a PC

You can see what is available in userland by querying the LSB support with the lsb_release command. More precisely, lsb_release -s prints a :-separated list of supported LSB features. Each feature has the form module-version-architecture. For example, availability of an ix86 C library is indicated by core-2.0-ia32, while core-2.0-amd64 is the analog for amd64. Not every distribution declares all the available LSB modules though, so more may be available than is detectable in this way.

You can see what architecture programs on the system are built for with a command like file /bin/ls. Note that it's possible to have a mixed system; even if ls is a 64-bit program, your system may have libraries installed to run 32-bit programs, and (less commonly) vice versa.

You can find out the preferred word size for development (assuming a C compiler is available) by compiling a 5-line C program that prints sizeof(void*) or sizeof(size_t). You can obtain the same information in a slightly less reliable way² by running the command getconf LONG_BIT.
#include <stdio.h>
int main() {
    printf("%d\n", (int)sizeof(void*));
    return 0;
}

As for virtual machines, whether you can run a 64-bit VM on a 32-bit system or vice versa depends on your virtual machine technology. See in particular How can I install a 64bit Linux virtual machine on a 32bit Linux?

¹ “Word size” is the usual name for what you call bitness.
² It can be unreliable if someone installed an alternate C compiler with a different target architecture but kept the system default getconf.
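The checks above can be combined into one small script (Linux assumed; the lm test is only meaningful on x86):

```shell
# CPU capability: the lm flag marks a 64-bit-capable x86 CPU.
if grep -qw lm /proc/cpuinfo; then
    echo "CPU: 64-bit capable (lm flag present)"
else
    echo "CPU: 32-bit only, or not x86"
fi
echo "kernel: $(uname -m)"                 # kernel architecture
echo "userland: $(getconf LONG_BIT)-bit"   # preferred userland word size
```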
Output of uname -a on my RHEL 5.4 machine is:

Linux <machine name> 2.6.18-164.el5 #1 SMP Tue Aug 18 15:51:48 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux

Does it mean that the hardware is 64-bit (going by perhaps the first x86_64) and the OS is also 64-bit (going by the last x86_64)? Also, why are there so many instances of x86_64? Can I install a 64-bit VM over a 32-bit OS and vice versa?
How to determine bitness of hardware and OS?
I’m not aware of a definitive list of possible values; however there is a list of values for all Debian architectures, which gives good coverage of the possible values on Linux: aarch64, alpha, arc, arm, i?86, ia64, m68k, mips, mips64, parisc, ppc, ppc64, ppc64le, ppcle, riscv64, s390, s390x, sh, sparc, sparc64, x86_64 (there are other possible values, but they’re not supported by Debian; I’m ignoring the Hurd here). Another source of information is the $UNAME_MACHINE matches in config.guess; this isn’t limited to Linux. Note that uname -m reflects the current process’ personality, and the running kernel’s architecture; not necessarily the CPU architecture. See Meaning of hardware platform in uname command output for details.
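If you only need a coarse architecture family rather than the exact string, a case statement over the values listed above works; the patterns here are an illustrative sketch (based on the Debian list), not an exhaustive mapping:

```shell
m=$(uname -m)
case "$m" in
    i?86|x86_64)     family=x86 ;;
    arm*|aarch64)    family=arm ;;
    mips*)           family=mips ;;
    ppc*)            family=powerpc ;;
    s390*)           family=s390 ;;
    riscv*)          family=riscv ;;
    sparc*)          family=sparc ;;
    *)               family=unknown ;;
esac
echo "$m -> $family"
```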
On my computer, uname -m prints x86_64 as output. What is the list of possible values that this command could output? I intend to use this command from a dynamic runtime to check the CPU architecture.
`uname -m` valid values
The Linux kernel syscall API is the primary API (though hidden under libc, and rarely used directly by programmers), and most standard IPC mechanisms are heavily biased toward the everything is a file approach, which eliminates them here as they ultimately require read/write (and more) calls. However, on most platforms (if you exclude all the system calls to get you there) there is a way: VDSO. This is a mechanism where the kernel maps one (or more) slightly magic pages into each process (usually in the form of an ELF .so). You can see this as linux-vdso.so or similar with ldd or in /proc/PID/maps. This is effectively memory-mapped IPC between the kernel and a user process (albeit one-way in its current implementation). It's used to speed up syscalls in general and was originally implemented (linux-gate.so) to address x86 performance issues, but it may also contain kernel data and access functions. Calls like getcpu() and gettimeofday() may use these rather than making an actual syscall and a kernel context switch. The availability of these optimised calls is detected and enabled by the glibc startup code (subject to platform availability). Current implementations contain a (read-only) page of shared kernel variables known as the "VVAR" page which can be read directly. You can check this by inspecting the output of strace -e trace=clock_gettime date to see if your date command makes any clock_gettime() syscalls; with a working VDSO it will not (the time will be read from the VVAR page by a function in the VDSO page, see arch/x86/vdso/vclock_gettime.c). There's a useful technical summary here: http://blog.tinola.com/?e=5 a more detailed tutorial: http://www.linuxjournal.com/content/creating-vdso-colonels-other-chicken , and the man page: http://man7.org/linux/man-pages/man7/vdso.7.html
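You can see the vDSO as an ordinary mapping in any process's address space, for example the shell's own (with a fallback message in case the kernel exposes neither name):

```shell
# [vdso] is the code page; [vvar] is the read-only kernel-variable page.
grep -E 'vdso|vvar' /proc/self/maps \
    || echo "no vDSO/vvar mapping found in this process"
```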
Are there any other interfaces, e.g. the /proc filesystem?
Are system calls the only way to interact with the Linux kernel from user land?
Yes, this depends on the type of filesystem. But all the modern filesystems I know of use a pointer scheme of some kind. The Linux/Unix filesystems (like ext2, ext3, ext4, ...) do this with inodes. You can use ls -i on a file to see which inode number is referenced by the filename (residing as meta-information in the directory entry). If you use mv on these filesystems, the resulting action will be a new pointer within the filesystem, or a cp/rm if you cross filesystem borders.
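You can watch this happen: move a file within one filesystem and its inode number is unchanged, i.e. only the directory entry moved, not the data. A sketch using a scratch directory (both names are on the same filesystem):

```shell
d=$(mktemp -d)
echo data > "$d/old"
before=$(stat -c %i "$d/old")   # inode number before the move
mv "$d/old" "$d/new"
after=$(stat -c %i "$d/new")    # same inode: no data was copied
echo "$before $after"           # the two numbers are identical
```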
I could see it going both ways. If the filesystem stores its directory structure and list of files in each directory, and then points to the disk location of each of the files, it shouldn't require the file's data to actually be moved on disk in order to 'move' a file. On the other hand, I could see the 'move' being implemented by copying the file, checking the copy, and then deleting the original if the copy checks out. Does the answer depend on the type of filesystem?
When I move a file to a different directory on the same partition, does the file's data actually move on disk?
What part of linux handles and resolves these shortcuts?

For the most part, individual applications or a window manager (WM)/desktop environment (DE). There are a few caught and handled by the kernel, such as VT switching with Ctrl-Alt-F[N]. The actual event propagates:

From the kernel
To the Xorg server
To the WM/DE
To the application

If caught and handled at any point therein, it will probably not continue to the next level down. If you run a (non-GUI) application inside a GUI terminal, the GUI terminal will have precedence over it.

What if several programs/processes share the same shortcut, how is priority resolved?

The WM/DE will take priority over the application.
In Ubuntu (or for that matter most other Linux distros), I could use the shortcut ctrl+t to open a new tab (in Firefox or similar), or I could use alt+tab to make Unity switch the highlighted window, or I could use alt+ctrl+F<1-6> to get to another tty. What part of Linux handles and resolves these shortcuts? What if several programs/processes share the same shortcut; how is priority resolved? (For the latter, I'm assuming that this is only relevant for programs on different 'levels', e.g. Firefox and the session script might compete, but Firefox and Chrome would never compete because they should not both be responding at the same time.)
how is a keyboard shortcut given to the correct program?
From Wikipedia: Asymmetric multiprocessing (AMP) was a software stopgap for handling multiple CPUs before symmetric multiprocessing (SMP) was available.Linux uses SMP.
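On a running system you can see how many logical CPUs the SMP kernel is driving (the /proc/cpuinfo format varies by architecture; the "processor" stanzas shown here are the x86 layout):

```shell
grep -c '^processor' /proc/cpuinfo   # one "processor" stanza per logical CPU
nproc                                # the same figure via coreutils
```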
What is the multiprocessing model for Linux? Is there a default or most commonly used model? Is it similar or very different from, say, BSD or even the MS Windows kernel? If SMP is normally used, can asymmetric multiprocessing be used instead if desired?
What is the default or most commonly used multiprocessing model in Linux? Symmetric or Asymmetric?
Because the distinction remains a little vague to me, this may not be a very clear answer. I'll just try to expose my point of view, more than actual technical facts.

First of all, it is probably relevant to note that Linux is a UNIX-like system. This means that while most concepts and implementations have been inspired by, sometimes taken from, UNIX, there was originally no common code base between the two systems. Actually, Linux was mostly inspired by MINIX, another UNIX-like system, the licensing of which Linus Torvalds found too restrictive.

Why is Unix tripartite and Linux two-layered? Is a Shell a completely different concept within Unix than in Linux?

To me, both are two-layered. The shell does not have any kind of privileged relationship with the kernel, nor should it. The first, privileged layer is the kernel, where everything is possible. The second, unprivileged layer is userland, in which various programs run, including the shell, and standard utilities such as ls. All these programs may communicate with the kernel through the UNIX or Linux set of system calls (these lists are probably not exhaustive). In my opinion, this is the only layer distinction which really needs to be mentioned when it comes to either UNIX or Linux.

Now, while the kernel sees no difference between a shell and another program, the user certainly does in the way he interacts with each. If a difference has to be made between the shell and other programs, then this difference definitely comes from the user, but remains unknown to the system. This is much more striking in your video than it would be for users of today's systems. Have a look at their terminals: this is amazingly minimal, and we would probably never think of using such things nowadays (even though, I'll admit, I'd love to). The thing is: back then, the shell was the first (and only) thing you got when your system had booted and you had logged in.
This was the thing you had to go through if you wanted to run any other program. This is probably where the difference is: while the shell is no different from any other program in the kernel's eye, it is a gateway to other programs for the user, and this gateway was much more visible in the 70s, in "core UNIX's" prime. Of course, this distinction is a lot less significant nowadays, probably because of two things:

Terminal emulation. You can actually get several shells at the same time, and switch between them. This means that you have something before the shell that gives you control over it.

Graphical interfaces. You can now start processes from GUIs, window managers, desktop environments, ... without ever seeing a terminal. We even have graphical programs designed to wrap around shell instances and make them more pleasant to use.

Now, I'm not very good at diagrams, but I guess I would put it this way, where I would say that:

Dashed lines represent user interaction.
Dotted lines represent shell-to-process interaction (spawning processes, manipulating I/O flows between them, ...).
Plain lines represent system interaction.

If you remove everything but the elements involving system interaction, you end up with two things: the kernel, and user programs. There are two layers, connected by system calls. Now if, as a user, you see the shell not just as another program, but as a gateway to others, you add user interaction and shell-to-process interaction. Here comes the third layer, yet nothing has changed for the kernel.
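That the shell is just another userspace process is easy to verify from within the shell itself, via Linux's /proc:

```shell
# The kernel lists the shell like any other program.
cat /proc/$$/comm                                         # the shell's executable name
awk '/^PPid:/ {print "parent pid:", $2}' /proc/$$/status  # its parent process
```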
I watched a short intro to Unix from the 70s (https://www.youtube.com/watch?v=7FjX7r5icV8 3D animation starts at 1:56); at the end, the general tripartite architecture of Unix was displayed as a 3D animation. Because I have already seen diagrams of the overall Linux architecture, I became confused. Both diagrams, Unix and Linux, share the kernel, but then Unix is wrapped by the shell and the shell by the utilities. Linux instead is only wrapped by the userspace, and the shell does not wrap anything but is just one of many processes within the userspace. How do Unix and Linux differ on a very basic level, and what do they have in common? Why is Unix tripartite and Linux two-layered? Is a shell a completely different concept within Unix than in Linux?
What are the very fundamental differences in architecture between Unix and Linux? [duplicate]
The Intel Core i5 is a 64-bit processor supporting Intel 64; Intel 64 is Intel's implementation of x86-64.
I'm going to install Arch Linux, yet I have to choose between several architectures my computer has. I have an aluminium MacBook Pro, with a 2.3 GHz Intel Core i5 processor. The Intel webpage tells me this is a dual-core processor. Running uname -a in the shell returns:

Darwin Romeos-MacBook-Pro.local 11.3.0 Darwin Kernel Version 11.3.0: Thu Jan 12 18:47:41 PST 2012; root:xnu-1699.24.23~1/RELEASE_X86_64 x86_64

which makes me believe I have an x86_64 machine, yet when executing arch in the shell it returns i386. I'm a bit confused about what to pick:

i686 CPU
x86-64 CPU
Dual Architecture

What would you recommend?
Which arch linux should I download?
As you can have a 32-bit Linux installed on a 64-bit machine, the safer way seems to be verifying CPU capabilities. For Intel and compatible processors:

grep -o -w 'lm' /proc/cpuinfo

http://www.unixtutorial.org/2009/05/how-to-confirm-if-your-cpu-is-32bit-or-64bit/

What you're looking for is the lm flag. It stands for X86_FEATURE_LM, Long Mode (64-bit) support. If you can find the "lm" flag among your CPU flags, this means you're looking at a 64-bit-capable processor.
I'm writing a program in Java and I need to determine the architecture for which Linux was compiled. I need something like uname -m, but without running any program; instead I want to read it from the /proc pseudo-fs. What is a reliable source to read from?
Get Linux architecture from /proc filesystem
In fact, Debian installs the majority of PMA into /usr/share/phpmyadmin which is the LSB standard correct location for it. But that's a detail that's not terribly relevant to the premise of your question. What Debian's PMA package also does is drop a config file in /etc/apache2/conf-available/phpmyadmin.conf that sets up the specifics PMA needs to run properly. You can look into it on your own time if you want the details, but what it boils down to is that from that point on PMA can and will work with every site you configure that has working PHP available, simply by adding the following line to the <VirtualHost> directive: Alias /phpmyadmin /usr/share/phpmyadminAt that point PMA should work for that site without any further actions required. (Also, drat. Ninja'd.)
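What that per-site setup might look like (the domain and DocumentRoot are illustrative; the Alias line is the one the answer describes):

```apache
<VirtualHost *:80>
    ServerName example-1.com
    DocumentRoot /var/www/example-1.com

    # Expose Debian's phpMyAdmin as example-1.com/phpmyadmin
    Alias /phpmyadmin /usr/share/phpmyadmin
</VirtualHost>
```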
I have a remote machine with LAMP and PHPMyAdmin (PMA). Let's assume this distro is Debian/Ubuntu. If I install PMA via apt install phpmyadmin (which will make it be installed under /usr/share/phpmyadmin/ I think), then I wouldn't be able to navigate to PMA based on the domains of my websites hosted on that LAMP (the following will error):

example-1.com/phpmyadmin
example-2.com/phpmyadmin

If I remember correctly, I'll have to navigate via, say, MY_IP_ADDRESS/usr/share/phpmyadmin/ to access PMA successfully. But if I install PMA directly in the document root in the following way, I would indeed be able to navigate to PMA based on domains (as shown above):

pma="[pP][hH][pP][mM][yY][aA][dD][mM][iI][nN]"
cd /var/www/html/
rm -rf ${pma}*
wget https://www.phpmyadmin.net/downloads/phpMyAdmin-latest-all-languages.zip
unzip ${pma}*.zip
mv ${pma}*/ phpmyadmin/
rm ${pma}*.zip
unset pma
cd

On the one hand, installing PMA with apt install phpmyadmin is simple and convenient, but it doesn't let me navigate to it based on domains. On the other hand, I do want to navigate to it just based on domains. If I'm not wrong, a symlink can be helpful. Am I in the right direction (I can't test now)?
Accessing PHPMyAdmin as installed by its distro package-index from the domain of each website
Kernel mode and user mode are a hardware feature, specifically a feature of the processor. Processors designed for mid-to-high-end systems (PC, feature phone, smartphone, all but the simplest network appliances, …) include this feature. Kernel mode can go by different names: supervisor mode, privileged mode, etc. On x86 (the processor type in PCs), it is called “ring 0”, and user mode is called “ring 3”.

The processor has a bit of storage in a register that indicates whether it is in kernel mode or user mode. (This can be more than one bit on processors that have more than two such modes.) Some operations can only be carried out while in kernel mode, in particular changing the virtual memory configuration by modifying the registers that control the MMU. Furthermore, there are only very few ways to switch from user mode to kernel mode, and they all require jumping to addresses controlled by the kernel code. This allows the code running in kernel mode to control the memory that code running in user mode can access.

Unix-like operating systems (and most other operating systems with process isolation) are divided into two parts:

The kernel runs in kernel mode. The kernel can do everything.
Processes run in user mode. Processes can't access hardware and can't access the memory of other processes (except as explicitly shared).

The operating system thus leverages the hardware features (privileged mode, MMU) to enforce isolation between processes. Microkernel-based operating systems have a finer-grained architecture, with less code running in kernel mode.

When user mode code needs to perform actions that it can't do directly (such as access a file, access a peripheral, communicate with another process, …), it makes a system call: a jump into a predefined place in kernel code. When a hardware peripheral needs to request attention from the CPU, it switches the CPU to kernel mode and jumps to a predefined place in kernel code. This is called an interrupt.
Further reading

Wikipedia
What is the difference between user-level threads and kernel-level threads?
Hardware protection needed for operating system kernel
I read that there are two modes called “kernel mode” and “user mode” to handle execution of processes. (Understanding the Linux Kernel, 3rd Edition.) Is that a hardware switch (kernel/user) that is controlled by Linux, or software feature provided by the Linux kernel?
Are “kernel mode” and “user mode” hardware features or software features?
3.2.0 is the version of the source code used to compile this kernel. Version numbers can be four numbers long (e.g. 2.6.32.55), indicating a patch level on that version; however, this four-number system was only used for 2.6 kernels starting at 2.6.8. It is not used with 3.x kernels, which use three numbers: release-major-minor. Note the subtle difference from the three-number major-minor-patchlevel system commonly used with software. -24-generic indicates a patch level and configuration used by the distro, 24 being their patch level and generic being the configuration used in compiling. This patch level does not necessarily reset/change for different kernel source versions; the distro either applies the patches unchanged (so, e.g., 3.2.1-24-generic) or increments the patch level (3.2.1-25-generic). The most significant aspects are the source version number and the configuration style. The latter is important because it indicates significant differences in the way the kernel was actually configured for build. This doesn't reveal which architecture the kernel was built for -- e.g., x86_64 -- but the uname -m output does.
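The three parts of such a release string can be pulled apart with plain POSIX parameter expansion; a minimal sketch (the example string is the one from the question):

```shell
# Split a kernel release string like "3.2.0-24-generic" into the
# upstream source version, the distro patch level and the flavour.
ver=3.2.0-24-generic
src=${ver%%-*}        # "3.2.0"   - upstream source version
rest=${ver#*-}        # "24-generic"
patch=${rest%%-*}     # "24"      - distro patch level
flavour=${rest#*-}    # "generic" - distro configuration flavour
echo "$src $patch $flavour"   # prints: 3.2.0 24 generic
```

In practice you would feed it `$(uname -r)` instead of a literal string.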
I want to install a package, and it has different versions for different OSes. The description on the package site is like this:

X86-64 Linux 3.0 Kernel

I looked it up and found people saying to use

uname -r
uname -m

I tried it and got this:

3.2.0-24-generic
x86_64

Does this tell me the Linux I'm using is x86_64 with a 3.2.0 kernel? What does -24-generic mean?
How to check Linux kernel?
Why Sparc specifically? ARM or MIPS is easier to emulate or to get in hardware; both are bi-endian, and both are supported by Linux in either endianness. There doesn't seem to be a well-maintained ARM big-endian port; your best bet for ARM seems to be the old Debian NSLU2 port. For MIPS you have the MIPS port. QEMU can emulate all of these CPUs.
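Whichever platform you end up on (emulated or native), you can verify its byte order from the shell; a small sketch using od, which interprets multi-byte units in the host's byte order:

```shell
# Write the 16-bit value 1 as two raw bytes (01 00) and let od read
# them back as a decimal short in the host's byte order:
# a little-endian host sees 1, a big-endian host sees 256.
val=$(printf '\1\0' | od -An -td2 | tr -d ' ')
if [ "$val" -eq 1 ]; then
  echo little-endian
else
  echo big-endian
fi
```

Running this inside the emulated system confirms you really got a big-endian environment.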
I need a big-endian platform to develop on with gcc and g++; what is a solution for that? I know that SPARC is one of the big-endian architectures, but I have no idea what OSes can run on it or how to emulate a SPARC machine under Linux. I should also note that any big-endian platform will do, as long as I can emulate it on x86 and g++ is available on it.
How can I emulate a big-endian platform on x86?
The boot sequence of Linux/Unix has many stages, and there are many references and answers on this site that explain the details. But to summarise: eventually the kernel is loaded with drivers so that the disk and devices can be used, and it then starts a process with a pid (process id) of 1. Traditionally this program was a program called init, but today there are several newer programs (systemd or upstart); which one is used depends on your distribution and version. Starting up is a tiered process. There is a concept of escalating run levels (1, 2, 3, 4, 5, 6, ...) and the start-up program will move between these levels automatically or in stages (so that the user can gain control):

1. the initial step (single-user mode)
2. multi-user mode
3. multi-user mode with networking
5. GUI mode
6. ...

These run levels are not fixed in stone either; they depend on the distribution, the start-up program being used (init, systemd, ...) and convention. The levels also depend on how the staged start-up/shutdown pattern has been designed. (Think: Linux is used in routers, Android phones, servers and desktops, all with different requirements.) In order to progress from one run level to another, various other programs (services), like bind (for DNS), networking, routing, webservers, ... are started or stopped, and bash may be used to run a particular script which starts or stops a service. Eventually you need to log in, either at a console or at a graphical interface, and you may be prompted for your username and password. Let's take a simple route, and say you are at a non-graphical console, and the login program is prompting you to authenticate. When you pass, it will read which shell is configured for the entered username from /etc/passwd and start it, with input and output set to your console, and then you have the prompt and can start doing your work.
So in this scenario, init starts -> login, which starts -> bash. So every process is a child of the first process (it might be more accurate to say every process has pid 1 as an ancestor). In the above example, login will exec the shell, replacing the login process with bash; the process id doesn't change. When you look with ps it looks like bash was started by init, because its parent pid is 1, but there was a chain of events. There's nothing really stopping pid 1 from just starting bash at the console (if pid 1 can work out what the console is at that point); this is down to how the start-up sequence is designed. (I had to do that once, but it is not normal practice.)
I do not understand when a shell, let's say bash, gets executed. Which program runs bash initially?
When does a shell get executed during the Linux startup process?
If you don't have the CPU, I presume you are buying one. If that is the case, then you can find out everything about the prospective CPU by looking up the data by its model number. You can guess the architecture from the manufacturer, as most manufacturers (e.g., Intel) only produce a small number of architectures (for Intel, currently, AMD64 aka x86-64, but i386 and IA-64 in the past). Typically the model number of the CPU will allow you to look up even more detailed information. Wikipedia typically has well-collected data in tables on this, but you can also typically find it on the manufacturers' websites. For your specific example, i5-6300hq, a Google search finds a reference to it on the Wikipedia page https://en.wikipedia.org/wiki/List_of_Intel_Core_i5_processors (with a specific table entry for your example further down), which in turn calls this an "Intel Core" processor, linking to https://en.wikipedia.org/wiki/Intel_Core In the sidebar on that page, it lists x86-64, linked to https://en.wikipedia.org/wiki/X86-64 and the first line of that page lists AMD64. Each of these pages has abundant details on what each classification means and how it relates to similar CPUs, including the outdated i386 and IA-64.
On the Debian download CD/DVD images page they have different ISOs for the different instruction set architectures. How do I know what the ISA of a CPU is before I buy one? I know about using the commands cat /proc/cpuinfo and lscpu, but these are only good after getting the CPU and running them on a Linux-based OS. How do I find out this information before getting the CPU? For example the CPU: Intel(r) core(tm) i5-6300hq cpu @ 2.30ghz On the official Intel website they show the ISA is "64 bits". But nothing specific as mentioned on the Debian website: amd64 / arm64 / armel / armhf / i386 / mips64el / mipsel / ppc64el / s390x / multi-arch Can someone tell me how they would go about finding this information?
How to find out what is the Instruction Set Architecture (ISA) of a CPU?
The architecture is the processor type. There are only a relatively small number of architectures. All processor types that execute the same user code are classified as the same architecture, even though there may be several different ways to compile the kernel; for example, x86 and powerpc are each a single architecture, but the kernel can be compiled using the 32-bit instruction set or the 64-bit instruction set (and a 32-bit kernel can execute only 32-bit programs, while a 64-bit kernel can execute both 32-bit and 64-bit programs). The platform describes everything else about the hardware that Linux cares about. This includes variations in the way booting works, in how some peripherals such as a memory controller, a power management coprocessor, cryptographic accelerators and so on work, etc. Whether features are classified under a platform or are separate drivers or compilation options depends partly on how fundamental the feature is (i.e. how difficult it is to isolate the code that uses it) and partly on how the person who coded support for it decided to do it.
I want to know the difference between architecture and platform in the Linux kernel. When I downloaded the latest kernel tarball, I observed a directory named arch; it contains the names of different processors, and inside any one processor's directory there is again a directory called platforms. For example, /arch/powerpc is a directory under arch in the Linux kernel, and /arch/powerpc/platforms is a directory under powerpc. So, what does this actually mean? Can anyone explain this in detail, from a hardware perspective to a software perspective, please?
Difference between architecture and platform in linux kernel
What /dev/sda means

There are four levels: the raw disk, a raw partition of that disk, a formatted filesystem on a partition, and the actual files stored within a filesystem. /dev/sda means an entire disk, not a filesystem. Something with a number at the end is a partition of a disk: /dev/sda1 is the first partition of the /dev/sda disk, and it's not even necessarily formatted yet! The filesystems each go on their own partitions, created by formatting each partition with its filesystem. So, what will generally happen is that you'll partition /dev/sda, format /dev/sda1 with a filesystem, mount /dev/sda1's filesystem somewhere, and then begin working with files on that filesystem.

Why have a unified filesystem

Linux (and UNIX in general) has the concept of the virtual filesystem. It combines all your real disks into one unified file system. This can be quite useful. You might, for example, want to put your operating system and its programs on one really fast real disk and all the users' personal files on another fairly slow but huge disk, because you want the OS to be fast but you want an affordable means of handling the files of thousands of users. Unlike the usual method in Windows, which by default breaks each disk up into a separate letter and where using D:\Users might break some programs that hard-code the path C:\Users, this can be done with ease and fluency. You format one partition on each disk, you mount the OS one at / and the user one at /home, and it acts like a system that put everything on one real disk, except you get the speed and affordability tradeoff you wanted.
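The disk-versus-partition naming scheme for sd devices can be sketched in a few lines of shell (the device names here are examples for illustration, not probed from a real machine):

```shell
# Classify example /dev names: sdX is a whole disk, sdXN is partition N
# of that disk.
for dev in sda sda1 sdb3; do
  case $dev in
    sd[a-z][0-9]*) echo "/dev/$dev: partition ${dev##*[a-z]} of /dev/${dev%%[0-9]*}" ;;
    sd[a-z])       echo "/dev/$dev: whole disk" ;;
  esac
done
```

This prints that /dev/sda is a whole disk while /dev/sda1 and /dev/sdb3 are partitions of /dev/sda and /dev/sdb respectively. (Other device families, such as NVMe's /dev/nvme0n1p1, use a different scheme.)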
I am installing the CentOS Linux distribution. At the partition step, CentOS tells me that it has detected a sda HD in my machine and that I should create partitions and assign mount points for this disk. But I find the logic a little twisted. I understand that Linux treats everything as a file and that sda is usually the device file representing my first SATA hard disk. But since no Linux is installed yet, there should be no file system yet. So how can there be any device file like sda? Someone tells me that “the Linux installer is also a Linux OS and hence there's an in-memory file system; my hard drive is just one tiny element of the file system”. Why do it like this? Do Windows or other OSes do the same thing?
How could Linux use 'sda' device file when it hasn't been installed?
Since it happens to be a bash script (despite the .sh extension), you can always do (within bash):

uname()
  if [ "$#" -eq 1 ] && [ "$1" = -m ]; then
    echo arm64
  else
    command uname "$@"
  fi
export -f uname
./gclone.sh

That is, replace uname with an exported function that outputs what you want when passed a -m argument.
I am trying to execute this shell script - https://raw.githubusercontent.com/oneindex/script/master/gclone.sh This shell script checks for uname -m output and doesn't like it ( i.e. aarch64 ). xd003@localhost:~$ uname -m aarch64 xd003@localhost:~$I want to change the uname -m output from aarch64 to arm64 so that it bypasses this check in the shell script and execute properly.
How do I change the output of "uname -m"?
On a Debian-based system, the bullet-proof way of determining the architecture, as appropriate for use in a package’s file name, is

dpkg --print-architecture

Note that architecture-independent packages use “all” there, and you’d have to know that in advance.
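In the makefile, the architecture part of the file name can then be computed instead of hard-coded; a sketch (the uname -m fallback is only for non-Debian systems, where the names differ, e.g. x86_64 rather than amd64):

```shell
# Determine the Debian architecture at build time; fall back to the
# kernel's machine name where dpkg is unavailable.
release=$(dpkg --print-architecture 2>/dev/null || uname -m)
echo "foo_0.0.0_${release}.deb"
```

On an amd64 Debian machine this prints foo_0.0.0_amd64.deb, matching the name debuild produces.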
Suppose I have a makefile that builds my package, and I only want the package to build if the package file is not present: package: foo_0.0.0_amd64.deb cd foo-0.0.0 && debuild -uc -usSo I am new to the debian build process, but I am anticipating that I'll either find a way to build for different architectures, or I'll be on a different architecture natively and that file name will change. So, I set it as a variable: major=0 minor=0 update=0 release=amd64 package: foo_${major}.${minor}.${update}_${release}.debI have a machine where uname -r yields #.##.#-#-amd64. What is the bulletproof way to fetch that amd64 in unix/linux?
Building packages: command which yields 'amd64' (like uname)
There’s nothing wrong with your setup, the problem here is the package pool and the web site. The rust-doc package was disabled with the 1.24.1+dfsg1-1~deb9u1 upload:Disable -doc package, requires packages not found in stretch and docs are available online anywayAs a result the package is no longer included in the indexes and isn’t available from apt’s perspective. The package which can still be downloaded from the web site is the old 1.14.0 release. I’ve informed the site team about the discrepancy. You’ll be able to install the package again normally once Debian 10 is released and you upgrade to that.
I'm running Debian Stretch. According to the Debian website, I should be able to install the package rust-doc, yet I can't: wizzwizz4@myLaptop:~$ sudo apt install rust-doc Reading package lists... Done Building dependency tree Reading state information... Done E: Unable to locate package rust-docEverything seems to say it doesn't exist. But it does. Do I have to do something special to install all-arch packages, or something? The output of apt policy is normal.
Can't install rust-doc on Debian Stretch
The default options for mke2fs, including those for ext4, can be found in /etc/mke2fs.conf. They can differ depending on the distro you're using; I'd take a look at that file on any distro you're curious about to see whether the -O ^64bit parameter would be necessary. According to the man page, '^' is indeed the prefix used to disable a feature. The effect of not using 64-bit ext4 is that you'll be limited to ~16 TiB volumes, whereas you can have 1 EiB volumes if you use the 64bit flag. However, 16 TiB is the recommended maximum volume size for ext4 anyway.
I am using GParted (0.28.1, Fedora 25) to format an external drive and noticed that the command displayed is:

mkfs.ext4 -F -O ^64bit -L "INSTALL" /dev/sdd1

When making disks in the past from the command line I have just used mkfs.ext4 DEVICE, which seemed to work well for various architectures. However, the above includes the option -O ^64bit, which I guess removes some default 64-bit feature of the filesystem so it works with 32-bit. Does it do this, is it normally necessary to pass it on modern Linux OSes (to enable compatibility with 32-bit systems etc.), and what cost could it have other than probably reducing the volume size limit?
What does this mkfs.ext4 operand mean?
Where do application layer protocols reside? Protocols are an abstraction, so they don't really "reside" anywhere beyond specifications and other documentation. If you mean where they are implemented, there are a few common patterns:

- They may be implemented first in native C as libraries which can be wrapped for use in other languages (since most other languages are themselves implemented in C and have a C interface). E.g., encryption protocols are generally like this.
- They may be implemented from scratch as libraries or modules for use in a specific language, using just that language (and/or the language it is implemented in). E.g., high-level networking protocols.
- They may be implemented from scratch by a given application.

These are all pure userland implementations, but some protocols -- e.g., low-level networking -- may be implemented in the kernel. This may include a corresponding native C userland library (as with networking and filesystems), or the kernel (including independent kernel modules) may provide a language-agnostic interface via procfs, /dev, etc.
Where do application layer protocols reside? Are they part of the library routines of a language, e.g. C, C++, Java? As goldilocks says in his answer, this is about the implementation of application layer protocols.
Are application layer protocols part of library routines?
The --build and --host options to configure scripts are standard configure options, and you very rarely need to specify them unless you are doing a cross-build (that is, building a package on one system to run on a different system). The values of these options are called "triples" because they have the form cpu-vendor-os. (Sometimes, as in your case, os is actually kernel-os, but it's still called a triple.) The base configure script is quite capable of deducing the host triple, and you should let it do that unless you have some really good evidence that the results are incorrect. The script which does that is called config.guess, and you'll find it somewhere in the build bundle (it might be in a build-aux subdirectory). If you're doing a cross-build and you need to know the host triple, the first thing to try is to run config.guess on the host system. The values supplied (or guessed) for --host and --build are passed through another script called config.sub, which will normalize the values. (According to the autoconf docs, if config.sub is not present, you can assume that the build doesn't care about the host triple.) The developers of a specific software package might customize the config.sub script for the particular needs of their build, and there are a lot of different versions of the standard config.sub script, so you shouldn't expect config.sub from one software package to work on another software package, or even on a different version of the same software package. Despite all the above, autoconf'ed software packages really should not need to know the names of the host os and vendor, except for identifying the default filesystem layout so that they provide the correct default file locations. You can read through config.sub to get an idea of the range of options which will be recognized, but it is not so easy to figure out how the values are used, or even if the values are used. The first field -- the cpu -- is the most likely to be used.
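Taking the triple from your command line apart makes the structure concrete; note that the os field may itself contain a dash (as in linux-gnu), so only the first two dashes delimit fields. A small sketch:

```shell
# Split a configure triple of the form cpu-vendor-os.
# The os part may contain further dashes (e.g. linux-gnu), so we split
# from the left: first field, then second, then everything remaining.
triple=x86_64-redhat-linux-gnu
cpu=${triple%%-*}       # "x86_64"
rest=${triple#*-}
vendor=${rest%%-*}      # "redhat"
os=${rest#*-}           # "linux-gnu"
echo "cpu=$cpu vendor=$vendor os=$os"   # prints: cpu=x86_64 vendor=redhat os=linux-gnu
```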
You can get a list of all the options by typing: ./configure --helpor, better, ./configure --help | lesssince there are always a lot of options. Other than the standard options (--build, --host and --target as above, and the options which override file locations), the specific options allowed by each configure script are different. Since they also tend to change from version to version of the software package, you should always check the configure script itself rather than relying on external documentation. Unfortunately, the contents of the configure script's help are not always 100% complete, because they rely on the package developers to maintain them. Sometimes unusual or developer-only options are not part of the ./configure --help output, but that is usually an indication that the option should not be used in a normal install.
When I am running a line like:

./configure --build=x86_64-redhat-linux-gnu --host=x86_64-redhat-linux-gnu {*shortened*} \
--with-imap-ssl=/usr/include/openssl/ --enable-ftp --enable-mbstring --enable-zip

I understand what "x86_64-redhat-linux-gnu" means descriptively, but I have questions: 1) Is there a list somewhere of all the choices, either in each configure script or on the internet? 2) Does making the answer more specific or more generic have much of an effect on the outcome? Thank you.
Compiling from source: What are the options for config script "build"?
There is a distinct difference here: on an ext2/3/4 filesystem, "deleting" a file by its name means that the reference to the inode, i.e. the data structure to which the file data is attached, is removed (the filename you see in ls is merely a reference to that inode). A file is only considered "deleted" when the last such reference is gone (if you are interested, you can look into the concept of "hard links"). However, if you open a file, that act also creates a reference to that inode, so as long as the file is open, it is not actually "deleted" and the process that has the file open can still work with it. The same holds true for an open directory. As long as you are still cd'd into the directory, the directory is still there, even if you deleted it from another shell instance. It is already in a "degraded" state, however: it is no longer accessible from other processes, and you cannot create new files in it even from the shell instance that is still there. (Notice, by the way, that I cannot reproduce the behavior you showed in your second example: when I run the same code in bash 4.3, pwd prints the directory name correctly even after the directory is deleted from the main shell instance.) Unmounting a device, on the other hand, is used to sever all connections to the files contained and to flush all changes made, so the operating system will refuse to do so when someone is still "in there".
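The inode-reference behaviour described above is easy to demonstrate in a shell:

```shell
# A file's data stays reachable through an open file descriptor even
# after its last name has been removed.
tmp=$(mktemp)
printf 'still here\n' > "$tmp"
exec 3< "$tmp"    # opening the file adds a reference to the inode
rm -- "$tmp"      # removes the name; the open fd keeps the inode alive
cat <&3           # prints: still here
exec 3<&-         # closing the last reference finally frees the inode
```

After the rm, ls no longer shows the file, yet cat still reads its contents through descriptor 3.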
Linux appears not to mind if I move or delete a file or directory that is still in use by a process. So why does it complain if I try to unmount a device that is in use as a working directory by a process? Example: $ mkdir -p a b $ sudo mount --bind a b $ sh -c 'cd b; sleep 10' & [1] 215679 $ sudo umount b umount: /home/laktak/b: target is busy. $ [1]+ Done sh -c 'cd b; sleep 10' $ sudo umount b $As opposed to: $ mkdir c $ sh -c 'cd c; sleep 10; pwd; cd ..' & [1] 220382 $ rmdir c $ $ pwd: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory chdir: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory $ [1]+ Done sh -c 'cd c; sleep 10; pwd; cd ..'
Why are mounts with an active working directory "busy"?
For some reason you've ended up with i386 cups-daemon installed, instead of amd64. That's why it ends up needing i386 PAM modules... To fix this, you need to remove cups-daemon and re-install the amd64 version; as root: apt-get remove cups-daemon:i386 apt-get install cups-daemon:amd64If your dpkg architectures are set up correctly you should be able to drop the :amd64 portion of the last command.
I just upgraded a Debian Wheezy to Jessie by changing /etc/apt/sources.list. If important: I pinned systemd to stay with sysvinit, and after the upgrade I removed the pin. If I now do apt-get update && apt-get dist-upgrade everything is up to date. So far, everything works fine; only a problem with CUPS authentication arose. On Wheezy, CUPS was already installed and worked; remote access was granted with cupsctl --remote-admin. I could authenticate on the web interface with root:myrootpassword. After the upgrade I did cupsctl --remote-admin again and it worked, so I can reach the admin web panel. Changing options still requires authentication, but using root:myrootpassword does not work anymore. I looked up the logfile /var/log/cups/error_log, which prints, when trying to authenticate on the CUPS web interface: pam_authenticate() returned 28 (Module is unknown) Then I looked into /etc/pam.d/cups, which has: @include common-auth @include common-account @include common-session These three included files exist in the same directory and are non-empty. I have, however, no experience with PAM. These packages are installed: # dpkg --get-selections | grep pam libpam-cap:amd64 install libpam-ck-connector:amd64 install libpam-modules:amd64 install libpam-modules-bin install libpam-runtime install libpam0g:amd64 install libpam0g:i386 install The file /var/log/auth.log has: Apr 17 15:01:14 mypc cupsd: PAM unable to dlopen(pam_ck_connector.so): /lib/security/pam_ck_connector.so: cannot open shared object file: No such file or directory Apr 17 15:01:14 mypc cupsd: PAM adding faulty module: pam_ck_connector.so However, the package libpam-ck-connector is installed and at the latest version. Doing find / -name pam_ck_connector.so gives: /lib/x86_64-linux-gnu/security/pam_ck_connector.so So it seems this file is simply in the wrong path.
I tried setting a symbolic link, but then I get in /var/log/auth.log for this file: wrong ELF class: ELFCLASS64 Then I installed the i386 package: apt-get install libpam-ck-connector:i386 which installs to /lib/i386[...]/security/libpam-ck-connector.so. I set a symbolic link again. But then the same message popped up for pam_cap. So, do we have some problem with 32<->64-bit compatibility of some package (libpam* or cups), or a bug in the Debian package manager/database? It can't be the correct way to have people install these things manually and set symbolic links, or is it? How can I fix this error message to make authentication work with root:myrootpassword again from the CUPS web panel?
CUPS not working correctly after Debian Wheezy -> Jessie upgrade because of a faulty libpam
I tried to crossgrade too, and ended up with the same results as you. Reinstalling the system is the easiest and fastest way of resolving your problem.
I was trying to crossgrade my architecture from i386 to amd64 (from https://wiki.debian.org/CrossGrading) and I got some error and broke apt-get and dpkg. apt-get and dpkg output cannot execute binary file: Exec format errorsudo apt-get outputs /usr/bin/apt-get: 3: /usr/bin/apt-get: Syntax error: ")" unexpectedHere is the output from running some commands to give you the idea: http://paste.debian.net/949117/ uname -a outputs Linux chowder 3.16.0-4-686-pae #1 SMP Debian 3.16.43-2 (2017-04-30) i686 GNU/Linuxso I think I'm i686 which I think is 32 bit. Therefore I think the issue is that I'm on a 32 bit system running 64 bit apt-get and dpkg. This makes sense because I could have done the "Crossgrade dpkg, tar, and apt" part of that wiki without properly crossgrading my architecture - I could have missed an error. Eventually I want to be on an amd64 bit architecture to download chrome and all sorts of stuff, but first I'll need to fix my apt-get and dpkg, and maybe that end goal is just a pipe dream. Should I just reinstall my os instead of crossgrading? Should I downgrade apt-get and dpkg (change them from 64 bit to 32 bit)? If so, where can I get an official copy of apt-get or dpkg - 32 bit? I wonder how I would have to install it once I got it too... I was on the #debian IRC chat as nate_ (and nate__ at one point because I accidentally had two tabs open) talking about this issue, but had to leave before I got an answer. There "flying_commands" said "maybe you could manually extract the i686 debs from debian.org (on another machine?) to get the binaries back?" but I'm not quite sure how to do that, and how to install the debs without dpkg working. Thank you to those at #debian, who helped extract a lot of this information so far. And thanks in advance to anyone who can help out.
I broke apt-get and dpkg when trying to crossgrade my Debian architecture
Since https://ark.intel.com/content/www/us/en/ark/products/95066/intel-nuc-kit-nuc7i3bnh.html states that it supports Windows x64, you should go for amd64.
I tried to search for the correct architecture to use for my Intel NUC7I3BNH, but I am none the wiser. Which architecture would be most appropriate - amd64 or i386?
Which Debian architecture should I use for my Intel NUC7I3BNH?
You are correct in saying that, when you open a terminal, you are using a shell. The primary job of a shell is to help you run executable programs. So what is an executable? Type ls and hit return. It should print out the files in the current directory. Now, this looks like the shell is running some sort of inbuilt command called ls, right? Wrong! It's actually creating a new Linux process that runs the executable program /usr/bin/ls. So why did it write the list of files to the shell? Well, the ls program doesn't know anything about the shell. In fact it has no real idea where it's going to write to. What the programmer did was make the program write the list to something called standard out. The shell then arranged for that standard out to be connected to the terminal, so the list printed there. Interestingly, the shell can also make this standard out go to other places. For example, typing ls > /tmp/ls.out won't print to the terminal. It actually sends the list to a file in the /tmp directory. Even more interestingly, typing ls | less makes the shell start the ls program as well as the less program and use a Linux trick called a pipe to connect the standard out of ls to the standard in of less. Neither of these programs knew anything about the shell nor, in fact, did the shell know how the programs work: if the program's been coded in the standard way, it'll all just work. Now, to the node.js case. Again, the shell just started the node.js program. If you don't supply arguments, this program tries to read from standard in, just like less. As you didn't pipe anything to it, the shell just hooked up the keyboard so that anything you type gets sent to node. The shell also hooked up standard out to the terminal so that anything node wrote went to the terminal, as we saw ls do. The net effect makes it appear that the shell now understands JavaScript, but not so. It really just understands executing programs and redirecting in/out (at least in this case). It's node doing the JS.
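The redirection and pipe mechanics described above can be seen in miniature with any two standard programs:

```shell
# Redirect one program's standard out to a file, then pipe another
# program's standard out into a third. Neither sort nor head knows
# where its output is going; the shell wires everything up.
printf 'banana\napple\n' > /tmp/fruit.txt   # like ls > /tmp/ls.out
sort /tmp/fruit.txt | head -n 1             # prints: apple
```

sort was written to read files and write to standard out; head was written to read standard in. The shell connects them without either program being aware of the other.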
I'm learning to use the terminal on my Ubuntu 14.04 and I'm running command line code in my shell (which I'm told is what is inside the terminal) to install programs. But I can also start my node.js server in my shell and I can then run javascript code in the terminal; it keeps track of values I store in variables and I can create functions and then use them and so on. However it does seem to change mode, because I'm no longer in a specific folder of my operating system, so maybe I'm no longer in my shell? So I started looking into shell commands. What are "commands"? According to http://linuxcommand.org/lc3_lts0060.php commands can be one of 4 different kinds:

1. An executable program like all those files in /usr/bin. Within this category, programs can be compiled binaries such as programs written in C and C++, or programs written in scripting languages such as the shell, Perl, Python, Ruby, etc.
2. A command built into the shell itself. bash provides a number of commands internally called shell builtins. The cd command, for example, is a shell builtin.
3. A shell function. These are miniature shell scripts incorporated into the environment.
4. An alias. Commands that you can define yourselves, built from other commands.

Does this mean that I'm always running the higher level code I have in my files (e.g. x.php, x.js, x.css, x.html files) with the help of my shell every time I start a program? Or does it only mean that I can use the command line to start a program which then runs somewhere else (if somewhere else, then where?)? How can you grasp the interaction between these different types of code/languages? For example: can you view it all as code put into a command line, line after line, with some languages making calls to other languages which then return control to the caller and so on, or what kind of mental model is useful for understanding what is going on?
What is the relationship between command line code and higher level language code?
There are ABIs (application binary interfaces) at multiple levels. To make a binary that works everywhere, one of these levels must be targeted. The two levels to target are the kernel ABI or the LSB ABI. A .deb package may not be compatible between Debian and Ubuntu because it may have dependencies on other things that are in one flavor or the other but not both. However, a lot of .deb packages will run in some range of versions of Ubuntu and Debian. Worse, an rpm from RedHat is unlikely to work on an Ubuntu system simply because the packaging format is not understood. The dependencies are what make executables not run on different Linux flavors. Almost all programs rely on external libraries. When executables are built, Linux (like many other operating systems, including Windows and other Unixes) does not include those external libraries as part of the executable; it only links them to a stub, and the actual library is shared between many executables. This reduces the size of the executables both on disk and in memory, and in some cases allows bugs in the libraries to be fixed without rebuilding the executables. However, if you don't have the correct version of the shared library in your flavor of Linux, the executable won't work. Executables that use shared libraries are said to be dynamically linked (meaning final linkage to the shared libraries is done dynamically at runtime). It is also possible to statically link executables. The result is a much larger executable, but it will run on a much wider variety of systems. At that point, the ABI is not the flavor of Linux, but the kernel itself. Generally, kernels are backwards compatible and executables are forwards compatible, meaning that (within some wide range of kernel versions) old executables will run on newer kernels, and newer kernels can run older executables.
There have been very few times in the history of Linux when the executable format itself changed, so there are limits to this as well. The architecture of the CPU the operating system is running on must also match the executable, with a few narrow exceptions. (Generally, i386 binaries can run on x86_64 systems if you have the relevant matching libraries.) It is possible to make a fat binary that contains code for multiple CPU architectures. This is very common in some operating systems, but rare in Linux.

It's also possible to have containerized executables. In this case, like a turtle, the executable is built into a filesystem (like squashfs) and carries around all of its dependent libraries and support files in its binary as separate files. This is what AppImage does.

The file and ldd commands can sometimes tell you how a binary is constructed, as described above. For example:

$ file /bin/ls
/bin/ls: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=897f49cafa98c11d63e619e7e40352f855249c13, for GNU/Linux 3.2.0, stripped
$ ldd /bin/ls
        linux-vdso.so.1 (0x00007fff95ae1000)
        libselinux.so.1 => /lib/x86_64-linux-gnu/libselinux.so.1 (0x00007fc6aebb8000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fc6ae990000)
        libpcre2-8.so.0 => /lib/x86_64-linux-gnu/libpcre2-8.so.0 (0x00007fc6ae8f9000)
        /lib64/ld-linux-x86-64.so.2 (0x00007fc6aec50000)

This binary is for the x86_64 architecture, is dynamically linked for version 2 of the dynamic linker, and requires the specific versions of the various shared libraries listed above. (Note: major versions must match; minor versions aren't even listed above.)
Compare this to:

$ file /bin/busybox
/bin/busybox: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), statically linked, BuildID[sha1]=36c64fc4707a00db11657009501f026401385933, for GNU/Linux 3.2.0, stripped
$ ldd /bin/busybox
        not a dynamic executable

This statically linked binary will run on any Linux kernel that supports x86_64 ELF version 1.

$ file prusaslicer.AppImage
prusaslicer.AppImage: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.18, stripped
$ ldd prusaslicer.AppImage
        not a dynamic executable

This appears to be a dynamically linked executable, yet it has no shared library dependencies other than the dynamic linker itself. It is actually a containerized executable: when it is run, it mounts the internal filesystem and runs the real executable inside. The only clue to this (without running it) is the filename (and size). After running it, it is possible to examine the FUSE-mounted filesystem, although this behind-the-scenes work is somewhat hidden.

Looking at the specific example you asked about...
I downloaded an x86_64 Linux version of node.js and found the following inside:

$ file bin/node
bin/node: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.32, BuildID[sha1]=e8b23b15ec6280a0d4838fbba1171cb8d94667c5, with debug_info, not stripped
$ ldd bin/node
        linux-vdso.so.1 (0x00007ffd633f0000)
        libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007efec3198000)
        libstdc++.so.6 => /lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007efec2f6c000)
        libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007efec2e85000)
        libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007efec2e65000)
        libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007efec2e60000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007efec2c36000)
        /lib64/ld-linux-x86-64.so.2 (0x00007efec31e5000)

So, being a dynamically linked executable with a fair number of external shared library dependencies, this limits what systems it can run on, but the range of versions that have these specific libraries (or can get them installed) is probably fairly wide. These are all base libraries that most Linux systems should have, and they have been stable enough that a wide range of OS versions carry these versions of the libraries. Sometimes, if you are missing some of these, installing the lsb-base or lsb-core package for your Linux flavor will make them available. The LSB (Linux Standard Base) is an ABI standard that includes a base set of standard libraries and other items, which becomes a lowest common denominator for all Linux distributions that want to provide a common target for third-party developers.

Note that if you have an executable that won't run but says it should, you can do detective work like the above to determine what is missing on your system and install the missing libraries.
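That detective work can be scripted; a minimal sketch that prints only unresolvable dependencies (substitute the executable you are diagnosing, e.g. bin/node, for /bin/sh, which is used here only because it exists everywhere):

```shell
#!/bin/sh
# Print only the dependencies the dynamic linker cannot resolve on this system.
# grep exits nonzero when nothing matches, so the fallback message is printed
# when every shared library was found.
ldd /bin/sh | grep 'not found' || echo "all shared libraries resolved"
```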
If you go to the download site of nodejs you can select "Linux Binaries (x64)", which is a tar archive that (among other files) contains a bin/ folder with a binary called nodejs. How is nodejs able to provide a "generic" binary that supposedly (or seemingly) is able to run on any Linux distro? I know they don't make that claim on their site, but that is what I'm assuming, since there isn't a specific OS release (like nodejs for Ubuntu, Debian, etc.).

Reading up on the topic (binary compatibility between Linux distros), I came across multiple answers that basically say this isn't really a thing:

"No, Debian and Ubuntu are not binary compatible. Debian and Ubuntu may use different compilers with different ABIs, different kernel versions, different libraries, different packages/versions, etc. As not all Ubuntu packages are in Debian (and vice versa), deb packages may also depend on uninstallable versions... So no, technically they are not binary compatible." (Answer to the question "Is Ubuntu LTS binary compatible with Debian?")

I understand some parts about ABIs, like architecture, calling conventions, system calls, etc., and it makes sense to me that if the same architecture and calling conventions are used, there's some compatibility, because at the heart of every Linux distro is the Linux kernel. However, I'm still struggling to understand why people say that two Linux distros are not binary compatible. What is it that is not compatible? And why is nodejs seemingly able to provide just one binary release "for all" Linux distros? This is more an educational question, not a "how can I make binary X compatible with distros x, y and z" question.
How does nodejs seemingly achieve binary compatibility between different Linux distros?
Using any mount system, you want to avoid situations where Nautilus lists the directory containing a mount that may or may not be mounted. So, with autofs, don't create mounts directly in, for instance, /nfs. If you do, when you use Nautilus to list the 'File System', it will try to create whatever mounts should exist in /nfs, and if those mount attempts fail it takes minutes to give up. So what I did was change auto.master to create the mounts in /nfs/mnt. This fixed the problem for me. I only get a long delay if I try to list the contents of /nfs/mnt, which I can easily avoid.
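For instance, a minimal sketch of that auto.master change (map file name and timeout are examples):

```
# /etc/auto.master -- mounts live one level down, under /nfs/mnt,
# so listing /nfs in Nautilus never triggers mount attempts
/nfs/mnt  /etc/auto.nfs  --timeout=60
```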
I'm running a small server for our flat share. It's mostly a file server with some additional services. The clients are Linux machines (mostly Ubuntu, but some other distros too) and some Mac(Book)s in between (but they're not important for the question). The server is running Ubuntu 11.10 (Oneiric Ocelot) 'Server Edition'; the system from which I do my setup and testing runs the 11.10 'Desktop Edition'.

We were running our shares with Samba (which we are more familiar with) for quite some time, but then migrated to NFS (because we don't have any Windows users in the LAN and wanted to try it out), and so far everything works fine. Now I want to set up auto-mounting with autofs to smooth things out (up to now everyone mounts the shares manually when needed). The auto-mounting seems to work too.

The problem is that our "server" doesn't run 24/7, to save energy (if someone needs stuff from the server s/he powers it on and shuts it down afterwards, so it only runs a couple of hours each day). But since the autofs setup, the clients hang quite often when the server isn't running.

I can start all clients just fine, even when the server isn't running. But when I want to display a directory (in the terminal or Nautilus) that contains symbolic links to a share under /nfs while the server isn't running, it hangs for at least two minutes (because autofs can't connect to the server but keeps trying, I assume). Is there a way to avoid that? So that mounting would be delayed until I change into the directory, or until the contents of that directory are accessed, not when merely "looking" at a link to a share under /nfs? I think not, but maybe it is possible to not try to access it for so long, and just give me an empty directory or a "can't find / connect to that dir" error or something like that.

When the server is running, everything works fine.
But when the server gets shut down before a share is unmounted, tools (like df or ll) hang (I assume because they think the share is still there but the server won't respond anymore). Is there a way to unmount shares automatically when the connection gets lost?

Also, the clients won't shut down or restart when the server is down and they still have shares mounted. They hang (infinitely, as it seems) at "killing remaining processes" and nothing seems to happen.

I think it all comes down to sane timeout values for mounting and unmounting, and maybe to removing all shares when the connection to the server gets lost. So my question is: how to handle this? And as a bonus: is there a good way to link inside /nfs without the need to mount the real shares (an autofs option, or maybe using a pseudo FS for /nfs which gets replaced when the mount happens, or something like that)?

My Setup

The NFS setup is pretty basic but has served us well so far (using NFSv4):

/etc/default/nfs-common

NEED_STATD=
STATDOPTS=
NEED_IDMAPD=YES
NEED_GSSD=

/etc/idmapd.conf

[General]
Verbosity = 0
Pipefs-Directory = /var/lib/nfs/rpc_pipefs
Domain = localdomain

[Mapping]
Nobody-User = nobody
Nobody-Group = nogroup

/etc/exports

/srv/ 192.168.0.0/24(rw,no_root_squash,no_subtree_check,crossmnt,fsid=0)

Under the export root /srv we have two directories attached with bind mounts:

/etc/fstab (Server)

...
/shared/shared/ /srv/shared/ none bind 0 0
/home/Upload/   /srv/upload/ none bind 0 0

The first one is mostly read-only (but I enforce that through file attributes and ownership instead of NFS settings) and the second is rw for all. Note: they have no extra entries in /etc/exports; mounting them separately works, though. On the client side they get set up in /etc/fstab and mounted manually as needed (morton is the name of the server and it resolves fine).
/etc/fstab (Client)

morton:/shared /nfs/shared nfs4 noauto,users,noatime,soft,intr,rsize=8192,wsize=8192 0 0
morton:/upload /nfs/upload nfs4 noauto,users,noatime,soft,intr,rsize=8192,wsize=8192 0 0

For the autofs setup I removed the entries from /etc/fstab on the clients and set the rest up like this:

/etc/auto.master

/nfs /etc/auto.nfs

First I tried the supplied executable /etc/auto.net (you can take a look at it here), but it wouldn't automatically mount anything for me. Then I wrote a /etc/auto.nfs based on some HowTos I found online:

/etc/auto.nfs

shared -fstype=nfs4 morton:/shared
upload -fstype=nfs4 morton:/upload

And it kinda works... or would work if the server ran 24/7. So we get the hangups when a client boots without the server running, or when the server goes down while shares are still connected.
automount nfs: autofs timeout settings for unreliable servers - how to avoid hangup?
Connect your device and find out the UUID of the filesystem by running either blkid or lsblk -f. Add a line to /etc/fstab such as:

UUID=05C5-A73A /mnt/32GBkey vfat noauto,nofail,x-systemd.automount,x-systemd.idle-timeout=2,x-systemd.device-timeout=2

Then execute:

systemctl daemon-reload && systemctl restart local-fs.target

Explanation:

noauto - don't mount with mount -a
nofail - boot will continue even if this mount point is not mounted successfully
x-systemd.automount - tell systemd to automount this entry
x-systemd.idle-timeout=2 - wait 2 seconds before unmounting the device after last usage
x-systemd.device-timeout=2 - wait only 2 seconds before giving "No such device" if the device is not connected

Note: there are no quotes around the UUID. The mount point directory doesn't need to exist; it will be created.

For more information about the options available, see systemd.mount(5).
I want my USB filesystems to automount when I connect the device. How do I setup automount with systemd via /etc/fstab?
systemd: How do I automount a USB filesystem using /etc/fstab?
With fstab, the advantage is that the remote filesystem will be mounted at boot (when the noauto mount option is not used). Additionally, it depends on how the mount point is defined. There are two options that determine the recovery behaviour when the NFS client can't reach the server. With the hard option (the default), the boot process will pause if there is a problem mounting the NFS share, and repeated tries are made to mount the share indefinitely. If the soft option is used, the mount fails after retrans retransmissions have been sent. On the other hand, autofs only mounts NFS shares when they are needed and accessed.
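As a sketch, two fstab entries showing the difference (server name and paths are placeholders):

```
# hard (default): the client retries indefinitely; processes block until the server responds
nfsserver:/export/data  /mnt/data  nfs  hard,intr       0 0
# soft: the request fails after "retrans" retries and returns an error to the application
nfsserver:/export/data  /mnt/data  nfs  soft,retrans=3  0 0
```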
What is the difference between using auto.master and having autofs automount your NFS mountpoints versus just putting the info in fstab? Linux Red-Hat 5/6
Linux: difference between using autofs with NFS and just using fstab
autofs automounts filesystems on demand, i.e. whenever you need them. A plain NFS mount is like mounting a complete partition remotely: the whole content of the partition is available at all times. But there are a few advantages of autofs over static NFS mounts:

1. Shares are accessed automatically and transparently when a user tries to access any files or directories under the designated mount point of the remote filesystem to be mounted.
2. Boot time is significantly reduced because no mounting is done at boot time.
3. Network access and efficiency are improved by reducing the number of permanently active mount points.
4. Failed mount requests can be reduced by designating alternate servers as the source of a filesystem.
What method is best for mounting an NFS share from another machine: an /etc/fstab entry or autofs? What is the difference between them?
NFS mount with fstab vs autofs
After reading through the man pages for way longer than I wanted, I came to the conclusion that you can't do it with one file. The power of +dir: is that it lets you add files to extend the configuration, so you don't have to edit any package files. Anything in /etc/auto.master.d is literally included into /etc/auto.master and uses that syntax. The actual mount has to be in its own file and has a different syntax.

Here is my /etc/auto.master.d/tools.autofs:

/top/dir /etc/auto.tools

All it does is place a mount point into the directory tree and reference a second file with the mount details. Here is /etc/auto.tools:

tools -ro nfsserver:/top/dir/tools

That works, but I eventually settled on using systemd.automount in this case. It works fine for simple mounts like this, and means one less package to install and configure. For what it's worth, it also needs two files to get everything configured.
I'm trying to setup autofs 5 on Debian 9 (Stretch). I want to mount nfsserver:/top/dir/tools to my /top/dir/tools Read only is fine in this case and I'm not worried about uid mapping. auto.master has a line: +dir: /etc/auto.master.dI'm guessing that there is a one line file I can stick in /etc/auto.master.d that sets up the above mount. The man pages are a bit hard to follow here, but I'm guessing someone has done this and it's probably easy. Does anyone have sample file from /etc/auto.master.d or an example of a simple autofs mount?
Need example use of autofs.master.d/
You could try rwsnoop (http://dtracebook.com/index.php/File_System:rwsnoop) to monitor I/O access using DTrace:

# rwsnoop - snoop read/write events.
# Written using DTrace (Solaris 10 3/05).
#
# This is measuring reads and writes at the application level. This matches
# the syscalls read, write, pread and pwrite.

Good luck!
I have a Solaris 10 server with autofs-mounted home dirs. On one server they are not unmounted after the 10 min timeout period. We've got AUTOMOUNT_TIMEOUT=600 in /etc/default/autofs, I ran automount -t 600, disabled and re-enabled svc:/system/filesystem/autofs:default service and nothing seems to work. My suspicion is that something on the system is periodically accessing all the mounted filesystems, maybe checking if they are accessible, and thus resetting the automounter timeout that in turn never expires. This is supported by a test I just did - if I set the timeout to 10 seconds the mountpoints are unmounted, looks like 10 sec is shorter than the period in which that something is doing the checks and the timer has a chance to expire. The question is how can I find what process is doing that? The server is a heavily used production system and I can't do any dangerous experiments on it. Note that the filesystems are not kept open and can be manually unmounted. That something is probably going mountpoint by mountpoint, cd in, cd out, move on, often enough to prevent automount from unmounting it. But it doesn't keep it open and therefore is not visible with lsof or fuser -c. I want to catch it or record it as soon as it accesses the mountpoints to know what's doing it. FWIW it's a Solaris 10 zone on rather beefy Solaris 10 host (Sparc / M5000).
What process is accessing a mounted filesystem sporadically?
I've done a lot of work with autofs, mounting a variety of different types of resources with it. You can check out the man page for autofs, which does answer some of your questions, as long as you keep straight that when the documentation refers to $USER, it means the user that's running the autofs daemon. These are the variables that you get by default:

Variable Substitution

The following special variables will be substituted in the key and location fields of an automounter map if prefixed with $, as customary in shell scripts (curly braces can be used to separate the field name):

ARCH    Architecture (uname -m)
CPU     Processor type
HOST    Hostname (uname -n)
OSNAME  Operating system (uname -s)
OSREL   Release of OS (uname -r)
OSVERS  Version of OS (uname -v)

autofs provides additional variables that are set based on the user requesting the mount:

USER    The user login name
UID     The user login ID
GROUP   The user group name
GID     The user group ID
HOME    The user home directory
HOST    Hostname (uname -n)

Additional entries can be defined with the -Dvariable=Value map-option to automount(8).

You might be tempted to use -DUSER=$USER, but this will only set $USER inside the autofs map file to the user that started the autofs daemon. The daemon is usually owned by a user such as root, or a chrooted user specifically set up for autofs.

NOTE #1: an autofs map entry is comprised of a key and a value. The variables are only allowed within the value portion of an entry.

NOTE #2: if the -D=... switch does not override a built-in variable, then $USER or $UID will contain the $USER and $UID of the person accessing the mount.

Limiting access to the CIFS share

Regarding your question of how to limit access to a CIFS mount, I don't see a way to accomplish this with autofs. The credentials used to mount a CIFS share are used throughout the duration that the share is mounted.
In effect, autofs, running its automount daemon as, say, root, is "equivalent" to the credentials of the CIFS user. This isn't what I would consider typical behavior for autofs; it is a by-product of using mount.cifs. Typical autofs behavior respects the permissions on the other end of the mount, whereas with mount.cifs it does not.

Bottom line: I think you're out of luck accomplishing your setup using autofs. You're going to have to use FUSE if you truly want each user to access CIFS shares with their own credentials.
(Crosspost from SF, where I wasn't getting much joy.) I have a CentOS 6.2 box up and running and have configured autofs to automount Windows shares under a /mydomain folder, using various howtos on the internet. Specifically, I have three files:

/etc/auto.master

# ...
/mydomain /etc/auto.mydomain --timeout=60
# ...

/etc/auto.mydomain

* -fstype=autofs,-DSERVER=& file:/etc/auto.mydomain.sub

/etc/auto.mydomain.sub

* -fstype=cifs,uid=${UID},gid=${EUID},credentials=${HOME}/.smb/mydomain ://${SERVER}/&

This works and allows each user to specify their own credentials in a file under their home directory. However, the mounts they create are then available to everyone, with the original user's credentials, until the timeout is reached. This is less than ideal, so I've been looking at trying to do one of the following:

1. Configure autofs so that the mounts are local to each user but under the same path, so they can each simultaneously access /mydomain/server1 with their own credentials
2. Configure autofs so that the mount points are under each user's home folder, so they can each simultaneously access ~/mydomain/server1 with their own credentials
3. Configure autofs so that the mounts are under a user-named folder, so they can simultaneously access /mydomain/$USER/server1 with their own credentials (but I would also need to ensure that /mydomain/$USER is 0700 to the given $USER)

So far, I can't see any way of doing #1, but for #2 or #3, I've tried changing the entry in /etc/auto.master so that the key is either ${HOME}/mydomain or /mydomain/${USER}, but neither has worked (the first showed no matching entry in /var/log/messages and the second did not appear to do the variable substitution). Am I missing something obvious?

(PS: Bonus props if you can provide a way to avoid the need for a plain-text credentials file: maybe a straight prompt for username/domain/password, or maybe even some Kerberos magic?)
(PPS: I have looked briefly at smbnetfs, but I couldn't get it to configure/make -- it asks for fuse >= 2.6 even though I have v2.8.3 according to fusermount --version -- and I couldn't find a released version for yum install) (PPPS: I also briefly looked at the supplied /etc/auto.smb but it looked like it would suffer the same sharing issues?)
Using autofs to mount under each user's home directory
I suppose not. The .mount/.automount unit name has to be equal to the mount path, escaped with systemd-escape --path. And the only way in systemd to instantiate units is the "template syntax" of the form name@instance.service. Hence it is at least not possible to have a dynamically instantiated mount unit. Just use autofs; systemd is not a replacement for everything.
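For reference, the path-to-unit-name escaping mentioned above can be inspected with systemd-escape (the path /home/alice is a made-up example):

```shell
# Print the unit-name form of a mount path; a mount unit for /home/alice
# would have to be named home-alice.mount, with no wildcard equivalent.
systemd-escape --path /home/alice
```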
I’m running jessie/sid with systemd 208 and am trying to convert the following wildcard autofs configuration to either an /etc/fstab or .mount/.automount definition:

$ cat /etc/auto.master
/home/* -fstype=nfs homeserver:/exp/home/&

(homeserver runs Solaris, with each subdirectory in /exp/home/ being a separate share.) Is there a way to emulate wildcard maps with systemd?
Wildcard automounts with systemd
The /etc/auto.master file is not the place to set the remote NFS directory path. /etc/auto.master expects to be given a map file or directory. From the auto.master man page:

"The auto.master map is consulted to set up automount managed mount points when the autofs(8) script is invoked or the automount(8) program is run. Each line describes a mount point and refers to an autofs map describing file systems to be mounted under the mount point."

So, create a file called /etc/auto.remote (you can change "remote" to whatever you want). Place your mount options in that file. It should have the following format:

share_name mount_options server:/remote/path/to/share

Here's one from my system, for example:

movies -rw,soft,intr,bg,rsize=8192,wsize=8192 192.168.0.72:/nfs_shares/movies

You can then reference the auto.remote file from your auto.master:

/path/to/mount_point /etc/auto.remote --timeout 60 --ghost
I'm struggling to get automount to work as desired. If I run the following:

sudo mount -t nfs server:/path/to/share /path/to/mount_point

I get the mount appearing fine. However, if I add the following line to my auto_master:

/path/to/mount_point server:/path/to/share

it creates the mount point directory but the contents aren't visible. When I observe the output of the mount command, they are different:

Using the first (manual mount) approach, the following entry is returned by mount:

server:/path/to/share on /path/to/mount_point (nfs)

Using automount, I get the following entry returned by mount:

map server:/path/to/share on /path/to/mount_point (autofs, automounted, nobrowse)

I assumed that by default automount mounts via NFS and is equivalent to the manual mount. What is the correct way to use automount to achieve the behaviour of the (correctly working) manual mount? The share is hosted on a Linux NIS domain and I am accessing it from a Mac (BSD Unix).
Automount not equivalent to mount?
In your configuration, /MOUNT_FOLDER is the base directory where subdirectories will be mounted by the indirect mount map auto.ext-usb. See man 5 autofs for further details. Example:

usbdisk -fstype=vfat,uid=yourworkingusername :/dev/disk/by-id/thediskid

If you cd /MOUNT_FOLDER/usbdisk, your USB disk will be mounted there (I assume it is vfat-formatted). You can use /etc/fstab instead, but then you will have to mount "by hand". The entry in /etc/fstab looks like this:

/dev/disk/by-id/thediskid /MOUNT_FOLDER vfat defaults,user,noauto 0 0

After that you can mount the USB disk as an ordinary user with mount /MOUNT_FOLDER.
Goal

I have a USB drive; let's say the drive's ID is /dev/disk/by-id/thediskid. I would like to mount the drive, by ID, to a folder (let's call it /MOUNT_FOLDER).

Question

What is the best way to do this using autofs?

Current Attempt

/etc/auto.master:

+auto.master
/localam auto.linux
/[another mount] [auto.othermount] --timeout=5 --ghost
/MOUNT_FOLDER auto.ext-usb --timeout=5 / -

/etc/auto.ext-usb:

/MOUNT_FOLDER /dev/disk/by-id/thediskid

I know I'm missing something, but I can't seem to get a good lead on what the proper syntax is. New to Linux; appreciate a pass if I'm overlooking something simple. Thanks!
How Do I use autofs to map a USB drive by its ID?
You could change AuthorizedKeysFile to something outside the home directory, for example /etc/ssh/keys/%u/authorized_keys. Then the keys would be available before /home/%u is mounted. From the man page of sshd_config:

AuthorizedKeysFile
    Specifies the file that contains the public keys that can be used for user authentication. The format is described in the AUTHORIZED_KEYS FILE FORMAT section of sshd(8). AuthorizedKeysFile may contain tokens of the form %T which are substituted during connection setup. The following tokens are defined: %% is replaced by a literal '%', %h is replaced by the home directory of the user being authenticated, and %u is replaced by the username of that user. After expansion, AuthorizedKeysFile is taken to be an absolute path or one relative to the user's home directory. Multiple files may be listed, separated by whitespace. The default is ``.ssh/authorized_keys .ssh/authorized_keys2''.
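A minimal sketch of the change (the /etc/ssh/keys tree is an example location; it must be root-owned and readable by sshd, with one subdirectory per user):

```
# /etc/ssh/sshd_config
AuthorizedKeysFile /etc/ssh/keys/%u/authorized_keys
```

After populating /etc/ssh/keys/<user>/authorized_keys for each user, restart sshd for the change to take effect.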
I want a setup where a user home directory is mounted on login (autofs). This is working with password based authentication. However, I want passwordless authentication, by generating public keys. Passwordless authentication works well if the user's home directory is stored locally (no autofs). In my case the keys are on the remote server and the home directory is mounted only when accessed and the server cannot verify you unless it has the public key. Is this even possible? (Both servers are running Solaris 10 x86 on VirtualBox.)
Autofs home directories with passwordless logins
Answering my own question. So apparently the correct way of doing this is through something called "multi-map" or "multiple-mount map", according to man 5 autofs. <sarcasm>Oh such clear and predictable name.</sarcasm> It's amazing that the words "nested" or "sub(-)director(y/ies)" do not appear at all in man 5 autofs.
Is it OK to have nested directories managed by autofs? E.g.:

/nfs/zfs
/nfs

On my Debian 8 machine, I have /etc/auto.zfs like this:

repo -fstype=nfs,rw 192.168.0.2:/repo

and /etc/auto.nfs like this:

foo -fstype=nfs,rw 192.168.0.3:/foo

My /etc/auto.master.d/nfs.autofs then references these files like this:

/nfs/zfs /etc/auto.zfs
/nfs /etc/auto.nfs

Is this supposed to work? Any caveats? My main fear is that autofs somehow completely removes /nfs/zfs automatically at some point.
Nested directories managed by autofs?
You always want to use AutoFS if the storage resource is not up when a system boots; this is one of the primary functions that AutoFS provides. Without it, a server will attempt to mount a storage resource at boot, time out, and never try again. Additionally, AutoFS gives you the ability to take the storage backend down, and the clients will essentially wait forever until it comes back. This Red Hat Storage Admin Guide pretty much sums it up. AutoFS also isn't just for automounting NFS shares; you can use it to automount most things, including CIFS (Samba), and even seldom-used ISO files for Linux distros, for example.
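As an example of the non-NFS uses, a sketch of an autofs map that loop-mounts an ISO on demand (all paths and names here are hypothetical):

```
# line in /etc/auto.master:
#   /mnt/iso  /etc/auto.iso
# /etc/auto.iso -- the leading colon marks a local (non-network) source:
centos  -fstype=iso9660,ro,loop  :/srv/isos/centos.iso
```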
Consider an environment of three Solaris servers (Server1-3), where Server1-3 all share a mount nas-server.company.com:/vol/appls-backup for storing backups and recovery-related files at periodic intervals (once every few hours). Here I think the choice of AutoFS has obvious advantages: the automounter serves to conserve local system resources and to reduce the coupling between systems which share filesystems with a number of servers [Reference].

Now each server has a share mounted onto some mount point where the application assemblies and live data are located. I placed these mounts in /etc/vfstab so the mounting would occur once at system boot. As they are accessed continuously, AutoFS might cause delays if it automatically unmounted the share at some time.

Now for case #2, in what situation would AutoFS be more desirable or required than using vfstab?
In what situation would it be more beneficial to use AutoFS over vfstab for shares dedicated to just one server?
It seems that the key clue in the log messages is that the probes logged as having "proto 6" succeed and those logged as having "proto 17" fail. 6 and 17 turn out to be the IP transport protocol numbers for TCP and UDP, respectively. Although NFS is traditionally served over UDP, service over TCP is supported by most stacks, and the server in this case was always configured to serve NFS only over TCP. This did not present a problem, however, until an as-yet uncharacterized change went in at the server, with the result that nfs/udp traffic was afterward silently discarded instead of being rejected with the appropriate ICMP response. That might very well have arisen from a firewall change, but I cannot at this point rule out an application-level change at the server.

In any event, I resolved the problem on the client side by adding proto=tcp to the mount options of each affected filesystem in the autofs map file. autofs was clever enough to forgo the UDP-flavor probes once that option was in place. Not only is the problem solved, but mount performance now seems even a little better than it was before the timeout problem started.
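The fix was one option per map entry; a sketch of the resulting line (share and server names are placeholders):

```
# /etc/auto.nfs -- force TCP so autofs skips the failing UDP probe
share  -fstype=nfs,proto=tcp  nfs.my.org:/export/share
```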
I have an infrastructure of CentOS 6 and CentOS 7 client machines that rely on autofs to automount various NFS filesystems exported by a service elsewhere in my organization. Recently, the clients began manifesting a troublesome behavior in which automounting these filesystems became very slow: whereas mounting used to go through in a few seconds, it began to take nearly two minutes. I think I have traced the problem down to a combination of factors:

- The hostname of the server has a large number of distinct resolutions (32)
- When the hostname has multiple resolutions, autofs probes each one to try to reject unresponsive ones and choose the one among the rest that currently has the best response time
- Exactly one of the two probe RPCs issued to each server by autofs appears to be consistently timing out for all of my servers

Here's a representative excerpt of the debug log:

Jul 13 15:48:18 myclient automount[17485]: get_nfs_info: called with host nfs.my.org(10.220.8.68) proto 6 version 0x20
Jul 13 15:48:18 myclient automount[17485]: get_nfs_info: nfs v3 rpc ping time: 0.000290
Jul 13 15:48:18 myclient automount[17485]: get_nfs_info: host nfs.my.org cost 289 weight 0
Jul 13 15:48:18 myclient automount[17485]: get_nfs_info: called with host nfs.my.org(10.220.8.68) proto 17 version 0x20
Jul 13 15:48:21 myclient automount[17485]: get_nfs_info: called with host nfs.my.org(10.220.8.84) proto 6 version 0x20

That shows one complete probe and the beginning, three seconds later, of the following one. In addition to the delay, I don't see any information about a response to the second RPC. That says "timeout" to me. Although the timeouts are individually only 3 seconds, multiplying that by 32 machines means over a minute and a half of timeouts before the mount itself is actually attempted. The clients are running the standard NFS client stacks for CentOS 6 and 7: nfs-utils 1.2.3 and autofs 5.0.5, or nfs-utils 1.3.0 and autofs 5.0.7, respectively, as packaged by CentOS.
Clients are under configuration management, so I am confident that they have had no software or configuration change since well before the problem began manifesting. The servers are running the Ganesha userspace NFS stack; in particular, it may be relevant that they do not support NFSv4, though this has not presented a problem in the past. Server management claims that no configuration change has been intentionally made, but allows that routine software updates may have been installed. So, finally, the question is as given in the headline: how can I resolve the mount delays caused by the host probing? Is there a relevant configuration setting in Ganesha whose default may have changed? Alternatively, is there a way to configure autofs to avoid issuing the failing RPCs? Or have I perhaps misidentified the problem? Turning on the autofs config parameter use_hostname_for_mounts seems to work around the issue, but as I understand it, this comes at the cost of losing resilience against failures and against overloading of individual servers. Is there no better way?
How can I resolve autofs mount delays related to host probing?
You can display or configure SMF autofs properties by using the sharectl command. For example:

# sharectl get autofs
timeout=600
automount_verbose=false
automountd_verbose=false
nobrowse=false
trace=0
environment=

# sharectl set -p timeout=200 autofs

You can check out this link for details. If you want a permanent mount, why not use a direct map instead? Here is a detailed link about direct autofs maps.
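As a sketch of the direct-map alternative mentioned above (the mount point, server, and export names here are hypothetical, and file locations can vary between Solaris releases):

```
# /etc/auto_master -- the /- entry activates the direct map
/-    auto_direct

# /etc/auto_direct -- each key is an absolute mount point
/data/reports    -rw    filer1:/export/reports
```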
I have a share mounted on an Oracle Solaris application server that reads and writes data on the share periodically. I have automounted it with a timeout of several minutes, but is there a way to ensure it never unmounts the share? Looking at the man pages on the Solaris box and the reference docs, it seems there is no such option, unless I missed it somewhere. On Linux, automount appears to offer such a facility: setting the timeout to 0 disables unmounting of the share.
Can AutoFS keep a share permanently mounted on Solaris?
It took me a few days to figure this out, so I just wanted to share the things I discovered, in case others have a hard time with AutoFS.

- Ensure you can manually mount the share using the mount command.
- Ensure AutoFS is active and running on both the client and server.
- In the /etc/auto.master file, ensure the first field contains the client mount point, such as /mnt.
- Ensure the permissions of /etc/auto.your-map are -rw-r--r-- (644).
- If using Samba and CIFS, ensure smb is active and running on the server.
- If using NFS, ensure NFS is active and running on both the client and server.
- If possible, disable Firewalld and Iptables on both the client and server.
- If possible, disable SELinux on both the client and server.
- On the client, list the mount point, which will trigger AutoFS to automount the share.
- Add OPTIONS="--debug" to /etc/sysconfig/autofs to log debugging events to /var/log/messages.
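The map-permission item above can be verified mechanically. A minimal sketch, demonstrated on a temporary file since /etc/auto.your-map is a placeholder name from the checklist:

```shell
# demonstrate the 644 permission check on a temp file;
# substitute /etc/auto.your-map on a real client
mapfile=$(mktemp)
chmod 644 "$mapfile"
stat -c '%a' "$mapfile"   # GNU coreutils stat; prints the octal mode
rm -f "$mapfile"
```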
I have a CentOS 7 server named server1.example.com using Samba to share /srv/samba/share. //server1.example.com/share is the path to the share. I am not able to get CentOS clients to mount //server1.example.com/share on /mnt/myShare using AutoFS.

- AutoFS is active and running on both the CentOS clients and the server.
- Both Firewalld and Iptables are disabled on both the CentOS clients and the server.
- SELinux is disabled on both the CentOS clients and the server.
- The permissions of /srv/samba/share and /mnt/myShare are 777.
- CentOS clients are able to mount the share as CIFS using the mount command.
- CentOS clients are able to mount the share as CIFS using /etc/fstab.
- CentOS clients are able to mount an NFS share using AutoFS.

The CentOS client has the following configuration.

/etc/auto.master:

/mnt /etc/auto.cifs --timeout=60 --ghost

/etc/auto.cifs:

myShare -fstype=cifs,username=myUsername,password=myPassword ://server1.example.com/share

The mount command shows that AutoFS wants to mount /etc/auto.cifs:

~]# mount
/etc/auto.cifs on /mnt

However, AutoFS is not mounting //server1.example.com/share on /mnt/myShare. I am unsure what needs to be done for AutoFS to mount the share on the CentOS clients.
AutoFS fails to mount Samba CIFS share
This seems to be a problem for people going back to 2004, and one which was recently re-addressed in March 2017. It is due to "user-friendly" tools like Nautilus implementing a hidden-files feature: the tool looks for a file called .hidden at the top of a filesystem for a list of filenames to hide. This causes autofs to try to mount this file from your server. (There is similar code in glib implementing the same feature.) Perhaps you can try revising the * map in your /etc/auto.home to be less encompassing. Or, if you configure your desktop not to hide hidden files, perhaps it will not look for the magic file. I'm not able to try out a working solution at the moment.
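If revising the map is an option, one possible form of a "less encompassing" /etc/auto.home is to drop the wildcard in favour of explicit keys, so that a lookup for a name like .hidden fails in the map itself rather than generating a mount request at the server (the user names here are hypothetical):

```
# /etc/auto.home -- explicit keys instead of the catch-all wildcard
alice    tyrell:/nfshome/alice
bob      tyrell:/nfshome/bob
```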
I am mounting home drives using autofs from a file server.

auto.master:

/home /etc/auto.home

auto.home:

* tyrell:/nfshome/

This seems to work great, but on the file server, tyrell, I'm constantly getting this error:

Apr 27 13:38:08 tyrell rpc.mountd[1145]: authenticated mount request from 192.168.1.164:691 for /nfshome/.hidden (/nfshome)
Apr 27 13:38:08 tyrell rpc.mountd[1145]: can't stat exported dir /nfshome/.hidden: No such file or directory

Why is it looking for a .hidden folder, and how can I get the client to stop trying to mount it? The clients are running Ubuntu 16.04 with the Unity desktop.
autofs ~/.hidden: No such file or directory
I had this same annoyance. Eventually, I ended up switching the automounting to systemd. You have to create a file in /etc/systemd/system for the mount. Naming conventions require it to be named after the mount point, with the path separators replaced by dashes. Since you have a dash in the name already, you'd have to figure out how to escape that. In my case I added /etc/systemd/system/smb-Tomato.mount:

[Unit]
Description=cifs mount script
Requires=network-online.target
After=network-online.service

[Mount]
What=//<IP of server>/<path on server>
Where=/smb/Tomato
Options=guest,uid=<my UID on client>,gid=<my GID on client>,rw
Type=cifs

[Install]
WantedBy=multi-user.target

Then I had to enable and start this mount:

sudo systemctl enable smb-Tomato.mount
sudo systemctl start smb-Tomato.mount

Since I wanted automounting, I also created a file /etc/systemd/system/smb-Tomato.automount containing:

[Unit]
Description=cifs automount script
Requires=network-online.target
After=network-online.service

[Automount]
Where=/smb/Tomato
TimeoutIdleSec=10

[Install]
WantedBy=multi-user.target

and similarly enabled and started it:

sudo systemctl enable smb-Tomato.automount
sudo systemctl start smb-Tomato.automount

So far I'm satisfied: the annoying broadcast message has disappeared. After doing this I figured that just using the guest mount option might have done the trick, but since I already have what I was after, I did not revert to try this out.
I have a samba share that does not require a password. Here are the non-default lines in my smb.conf:

[global]
map to guest = Bad User

[distr-ro]
path = /home/distr
public = yes
writable = no

On RHEL 6 I added this line to /etc/auto.master and it worked:

/cifs /etc/auto.smb --timeout=60

But on CentOS 7, any attempt to access the share hangs and I see a broadcast message:

[root@wc8 etc]# ls /cifs/okdistr/distr-ro

Broadcast message from root@wc8 (Wed 2016-03-02 03:51:45 EST):

Password entry required for 'Password for root@//okdistr/distr-ro:' (PID 10006).
Please enter password with the systemd-tty-ask-password-agent tool!
autofs asks password for passwordless samba share on Centos 7
I haven't checked this myself, but you should be able to blacklist the autofs4 module. That means you should add

blacklist autofs4

to a modprobe config file, e.g. a new file /etc/modprobe.d/blacklist-autofs4.conf. I found this thread https://lists.fedoraproject.org/pipermail/devel/2011-June/152585.html that suggests blacklisting that module.
This line is from the output of dmesg -e on Ubuntu 15.04 using systemd and UEFI:

[ +14.874691] systemd[1]: Inserted module 'autofs4'

As it shows, it takes 14 seconds to load. To my knowledge, autofs4 is used to auto-mount partitions at start-up; please correct me if I am wrong. I don't need any partition mounted at start-up. The question is: is it safe to disable autofs4, and if yes, how can I do that?

UPDATE: the above output comes from dmesg -e, but if I try plain dmesg then I get this:

[ 6.883649] EXT4-fs (sda2): mounted filesystem with ordered data mode. Opts: (null)
[ 21.495107] systemd[1]: Module 'autofs4' is blacklisted

As you can see, the system has to wait 14 seconds before autofs4 is loaded.
how to disable autofs4 [closed]
I found some help on the #sssd IRC channel. Apparently the user in the log entry is not the connecting user, but just the automount map being looked up. It turned out I had a misconfiguration in AD. By raising the domain debug_level to 6 in my sssd.conf as follows:

...
[domain/example.com]
debug_level = 6
...

I was able to view the LDAP query made to my AD server. It turned out I had to place my nisObject entries under my nisMap entries; I had them placed in the same OU=automount. So I moved these objects, and all is working fine now!
I am trying to set up SSSD to get automount maps from Active Directory. I think my settings are correct, but it uses the wrong username to query AD. It takes whatever is set as the map name (behind the + sign) in /etc/auto.master; for example, +auto.master results in the following debug log (sssd_autofs debug_level=6):

[sssd[autofs]] [accept_fd_handler] (0x0400): Client connected!
[sssd[autofs]] [sss_cmd_get_version] (0x0200): Received client version [1].
[sssd[autofs]] [sss_cmd_get_version] (0x0200): Offered version [1].
[sssd[autofs]] [sss_autofs_cmd_setautomntent] (0x0400): Got request for automount map named auto.master@example.com
[sssd[autofs]] [sss_parse_name_for_domains] (0x0200): name 'auto.master@example.com' matched expression for domain 'example.com', user is auto.master
[sssd[autofs]] [setautomntent_send] (0x0400): Requesting info for automount map [auto.master] from [example.com]
[sssd[autofs]] [lookup_automntmap_step] (0x0400): Requesting info for [auto.master@example.com]
[sssd[autofs]] [sysdb_get_map_byname] (0x0400): No such map
[sssd[autofs]] [lookup_automntmap_step] (0x0080): No automount map [auto.master] in cache for domain [example.com]
[sssd[autofs]] [sss_dp_issue_request] (0x0400): Issuing request for [0x406840:0:auto.master@example.com]
[sssd[autofs]] [sss_dp_get_autofs_msg] (0x0400): Creating autofs request for [example.com][4105][mapname=auto.master]
[sssd[autofs]] [sss_dp_internal_get_send] (0x0400): Entering request [0x406840:0:auto.master@example.com]
[sssd[autofs]] [lookup_automntmap_step] (0x0400): Requesting info for [auto.master@example.com]
[sssd[autofs]] [sysdb_autofs_entries_by_map] (0x0400): Getting entries for map auto.master
[sssd[autofs]] [sysdb_autofs_entries_by_map] (0x0400): No entries for the map
[sssd[autofs]] [lookup_automntmap_step] (0x0400): setautomntent done for map auto.master
[sssd[autofs]] [sss_autofs_cmd_setautomntent_done] (0x0400): setautomntent found data
[sssd[autofs]] [sss_dp_req_destructor] (0x0400): Deleting request: [0x406840:0:auto.master@example.com]
[sssd[autofs]] [sss_autofs_cmd_getautomntent] (0x0400): Requested data of map auto.master@example.com cursor 0 max entries 512
[sssd[autofs]] [sss_autofs_cmd_getautomntent] (0x0400): Performing implicit setautomntent
[sssd[autofs]] [sss_parse_name_for_domains] (0x0200): name 'auto.master@example.com' matched expression for domain 'example.com', user is auto.master
[sssd[autofs]] [setautomntent_send] (0x0400): Requesting info for automount map [auto.master] from [example.com]
[sssd[autofs]] [lookup_automntmap_step] (0x0400): Requesting info for [auto.master@example.com]
[sssd[autofs]] [sss_dp_issue_request] (0x0400): Issuing request for [0x406840:0:auto.master@example.com]
[sssd[autofs]] [sss_dp_get_autofs_msg] (0x0400): Creating autofs request for [example.com][4105][mapname=auto.master]
[sssd[autofs]] [sss_dp_internal_get_send] (0x0400): Entering request [0x406840:0:auto.master@example.com]
[sssd[autofs]] [lookup_automntmap_step] (0x0400): Requesting info for [auto.master@example.com]
[sssd[autofs]] [sysdb_autofs_entries_by_map] (0x0400): Getting entries for map auto.master
[sssd[autofs]] [sysdb_autofs_entries_by_map] (0x0400): No entries for the map
[sssd[autofs]] [lookup_automntmap_step] (0x0400): setautomntent done for map auto.master
[sssd[autofs]] [getautomntent_implicit_done] (0x0020): Cannot get map after setautomntent succeeded?
[sssd[autofs]] [sss_dp_req_destructor] (0x0400): Deleting request: [0x406840:0:auto.master@example.com]
[sssd[autofs]] [sss_autofs_cmd_endautomntent] (0x0400): endautomntent called
[sssd[autofs]] [client_recv] (0x0200): Client disconnected!

Anyone got this working?
SSSD and autofs
You can think of LDAP as a tree (example). Thus, ou=example,dc=hostname1,dc=people is traversed starting at the root dc=people, passing through its child dc=hostname1, and arriving at ou=example as a child node of dc=hostname1. If you mix up that order, LDAP isn't able to traverse the tree. In your second example it will fail to find the root element dc=example, and you'll just get a message telling you that there is no such path in your directory tree (check your logs).
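As a plain string-handling illustration of that ordering (not an LDAP client), note that a DN lists components leaf-first, so the root of the tree is the rightmost component:

```python
# a DN is read right to left: the last component is the tree root
dn = "ou=example,dc=hostname1,dc=people"
components = dn.split(",")
print("root:", components[-1])                       # root: dc=people
print("path from root:", " -> ".join(reversed(components)))
```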
I wonder whether there is any reason to put the dc (domain component) entries in a specific order when configuring an LDAP server for use with autofs. For example, if on RedHat/CentOS I specify the dc components in this order:

ou=example,dc=hostname1,dc=people

the LDAP server is active and I can see it. But if by a simple "mistake" I write

ou=people,dc=hostname1,dc=example

I do not see any mounted LDAP server. What exactly is the meaning of the ou and dc names, and where can I look to see which order I have to follow?
The importance of the order of domain component (dc) names when setting up an LDAP server: is there any reason?
I finally found the solution after having problems with dbus after an update. New script:

#!/usr/bin/fish

function usage
    echo "need at least two arguments"
    echo " 1. <user name>:[<config>]:<crypt folder>"
    echo " 2. <mount folder>"
    exit 1
end

if test (count $argv) -lt 2
    usage
end

set split (string split ':' $argv[1])
if test $status -ne 0
    usage
else if test (count $split) -eq 2
    set USER_NAME $split[1]
    set CONFIG_PATH ""
    set CRYPT_PATH (realpath $split[2])
else
    set USER_NAME $split[1]
    set CONFIG_PATH (realpath $split[2])
    set CRYPT_PATH (realpath $split[3])
end

set MOUNT_PATH (realpath $argv[2])

set PASS (sudo -H -u $USER_NAME bash -c "env DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/`id -u $USER_NAME`/bus secret-tool lookup server $CRYPT_PATH")

if test -n $CONFIG_PATH
    set COMMAND "env ENCFS6_CONFIG=$CONFIG_PATH"
end

set COMMAND $COMMAND "encfs --public --extpass='echo \'$PASS\'' $CRYPT_PATH $MOUNT_PATH"
eval $COMMAND
I have built myself a nice setup: I have encrypted encfs folders that are mountable with autofs, i.e. when I enter the folders they are automatically decrypted. I have the encfs password added to my keyring and wrote a custom script that extracts the password (/usr/local/sbin/load-encfs, see below). The only drawback is that I have to enter my login password to unlock the keyring on the first visit to any of the automounted folders. Every visit to another folder (or to the same one after the timeout has expired) does not prompt me for my password.

Question: Is there any possibility that the first password prompt can also be automated somehow?

/etc/autofs/auto.master: I just added this line:

/- /etc/autofs/auto.encfs

/etc/autofs/auto.encfs:

/home/user/Privat -fstype=fuse :load-encfs\#user\:/home/user/encfs-keys/private.xml\:/home/user/Dropbox/.private
/home/user/BTU -fstype=fuse :load-encfs\#user\:/home/user/encfs-keys/btu.xml\:/home/user/Dropbox/.btu
/home/user/TUD -fstype=fuse :load-encfs\#user\:/home/user/encfs-keys/tud.xml\:/home/user/Dropbox/.tud

/usr/local/sbin/load-encfs (fish script):

#!/usr/bin/fish

function usage
    echo "need at least two arguments"
    echo " 1. <user name>:[<config>]:<crypt folder>"
    echo " 2. <mount folder>"
    exit 1
end

if test (count $argv) -lt 2
    usage
end

set split (string split ':' $argv[1])
if test $status -ne 0
    usage
else if test (count $split) -eq 2
    set USER_NAME $split[1]
    set CONFIG_PATH ""
    set CRYPT_PATH (realpath $split[2])
else
    set USER_NAME $split[1]
    set CONFIG_PATH (realpath $split[2])
    set CRYPT_PATH (realpath $split[3])
end

set MOUNT_PATH (realpath $argv[2])

set PID (ps aux | sed -e '/sed/d;/$USER_NAME.*xinit/!d;s/^.*xserverrc \(:[0-9\.]*\).*/aaa/' | awk '{ print $2 }')
if test -n "$PID"
    set DISPLAY (cat /proc/$PID/environ | tr '\0' '\n' | grep '^DISPLAY=' | sed -r 's/.*=(.*)/\1/')
end
if test -z "$DISPLAY"
    set DISPLAY ":0.0"
end

set PASS (env DISPLAY=$DISPLAY sudo -H -u $USER_NAME secret-tool lookup server $CRYPT_PATH)

if test -n $CONFIG_PATH
    set COMMAND "env ENCFS6_CONFIG=$CONFIG_PATH"
end

set COMMAND $COMMAND "encfs --public --extpass='echo \'$PASS\'' $CRYPT_PATH $MOUNT_PATH"
eval $COMMAND

I added the various arguments to be flexible about where the config file for encfs is stored.
Automount with autofs, encfs and keyring access
We have several maps, and some of the maps have tens or hundreds of volumes, so it does work for a fleet of hundreds of Linux machines and tens of machines on other platforms. The only issue we had with old autofs software in the early days was that it could not cope with modifications and updates online, that is, changing the NFS volume and mount point while decommissioning old NFS filers. I do not believe performance is an issue; however, one always has to test and confirm. Scalability and performance were enhanced in Linux autofs 5.x with the introduction of multithreading. For us, the minimal distro is RHEL 6, which already comes with 5.x. So no, it is not bad. You should benchmark and test it, but it will add flexibility and stability to your environment, since the configuration will be centralised (I am hoping that is one reason you would like to do this).
My company literally has thousands of NFS volumes, which we need to mount on a few of our servers. However, we never mount all of them at the same time; typically we mount about 12 at a time. Most of the time, most of the NFS volumes are actually offline. In the past, we have been mounting and unmounting them manually via the command line, as the need arose. We would like to move to using autofs for mounting these volumes. Is it bad to define thousands of autofs rules, one for each NFS volume? Does autofs have a hard limit about the number of rules it can have? Does autofs performance decrease drastically with a large number of rules?
Is it bad to have thousands of autofs rules?
It looks like the problem here was the permission settings on the folder being exported on the server side. Doing the following on the server allowed me to write from the client:

[root@centosserv ~]# chmod 777 /NFSSHARE

I did this on fresh installs of both the server and the client. I was experiencing the same problems all over again and, this time without even trying to disable iptables on the server or going through the changes I edited into the question, decided to make sure that the permissions on /NFSSHARE were properly set. That seems to have done the trick.
I had previously been able to configure an NFS server on a computer running CentOS 6.6, and to mount the filesystem in a virtual machine with the same OS using autofs. Last week I did a fresh install of all the OSes I had, and now for some reason I cannot get it to work. The server computer still runs CentOS 6.6, and the virtual machine is now running CentOS 7 (I also tried it with another virtual machine running Debian Wheezy, but it still didn't work). The server (centosserv) is running on 192.168.1.89, and the client (centoscli, the CentOS 7 one) on 192.168.1.100. The filesystems I want to share are /NFSSHARE and /NFSSHARE/mydir, and as such the /etc/exports file on the server contains the following:

/NFSSHARE 192.168.1.100(fsid=0,rw,sync,no_subtree_check,root_squash,anonuid=1000,anongid=1000)
/NFSSHARE/mydir 192.168.1.100(ro,sync,no_subtree_check)

If I run showmount -e I get this:

[root@centosserv ~]# showmount -e
Export list for centosserv:
/NFSSHARE/mydir 192.168.1.100
/NFSSHARE       192.168.1.100

So everything looks good so far. On the client side, I edited /etc/auto.master to include the following line:

/mnt/nfs /etc/auto.nfs-share --timeout=90

And then created the /etc/auto.nfs-share file with the following contents:

[root@centoscli ~]# cat /etc/auto.nfs-share
writeable_share -rw 192.168.1.89:/
non_writeable_share -ro 192.168.1.89:/mydir

This also seems to be working, given the output below:

[root@centoscli ~]# mount | grep nfs-share
/etc/auto.nfs-share on /mnt/nfs type autofs (rw,relatime,fd=18,pgrp=2401,timeout=90,minproto=5,maxproto=5,indirect)

At this point, /mnt/nfs/writeable_share and /mnt/nfs/non_writeable_share are not mounted unless I try to access them directly, as per this tutorial (which is the same one I had followed the last time I set up the NFS server*). So only after I try ls -l /mnt/nfs/writeable_share should it be mounted.
But the output I get is:

[root@centoscli ~]# ls -l /mnt/nfs/writeable_share
ls: cannot access /mnt/nfs/writeable_share: No such file or directory

I pinged the server from the client and vice versa, just to check that they could reach each other, and they seem to. I did everything exactly the same way I had done the first time round, yet for some reason I cannot get it to work this time. I have tried doing this by editing the /etc/fstab file on the client side, and manually, instead of using autofs, but it doesn't seem to work that way either. Disabling iptables on the server side makes it work with fstab and manually, but still not with autofs. What else can I check, and where have I gone wrong?

*With the exception of the first three steps, since I have neither a service called nfs-common nor a /etc/default/nfs-common file.

EDIT: I was checking out this tutorial in a CentOS group on FB that, after the server side is supposedly settled and we're ready to start configuring the client side, says this:

Test if you can see the NFS server: showmount -e

So I'm guessing that using showmount -e on the client I should be able to get some info on the server, or at least some acknowledgement that I can mount filesystems from that server on this client. However, when I tried showmount -e 192.168.1.89 on the client side, the only message I got was this:

clnt_create: RPC: Port mapper failure - Unable to receive: errno 113 (No route to host)

I'm guessing this could be the problem, but I'm not sure what it means.

EDIT 2: After disabling iptables on the server side, I can now see the exported filesystems when I use showmount -e 192.168.1.89 on the client side. Which renders my first edit above moot, I think.
However, I am still not able to mount the filesystems using autofs.

EDIT 3: OK, besides having iptables disabled, I have edited both /etc/exports on the server and /etc/auto.nfs-share on the client to look like this:

[root@centosserv ~]# cat /etc/exports
/NFSSHARE 192.168.1.100(fsid=0,rw,sync,no_subtree_check,root_squash,anonuid=1000,anongid=1000)
/NFSSHARE/mydir 192.168.1.100(rw,sync,no_subtree_check,root_squash,anonuid=1000,anongid=1000)
/NFSSHARE/mydir/ro 192.168.1.100(ro,sync,no_subtree_check)

[root@centoscli ~]# cat /etc/auto.nfs-share
writeable_share -fstype=nfs4,rw 192.168.1.89:/mydir
non_writeable_share -fstype=nfs4,ro 192.168.1.89:/mydir/ro

With that I seem to be able to mount the filesystems, but not to write from the client:

[root@centoscli ~]# touch /mnt/nfs/writeable_share/test_from_client.file
touch: cannot touch ‘/mnt/nfs/writeable_share/test_from_client.file’: Permission denied
What's wrong with my NFS setup?
After trying everything, I was able to resolve my issue. I gave my auto_nfs config to a colleague of mine. To keep things simple I sent the config over Skype. When he applied it, everything worked fine for him. With this in mind, I re-created my auto_nfs file using the content that I had just sent over Skype, then called

sudo automount -cv

again, and everything worked. It seems that there was some kind of issue with the encoding and/or whitespace in the original file.
I am having an issue with macOS Sierra. I have added an NFS config in auto_nfs for two NFS shares, and autofs picks up only the first one. Here's my auto_master:

#
# Automounter master map
#
+auto_master        # Use directory service
/net                -hosts      -nobrowse,hidefromfinder,nosuid
/home               auto_home   -nobrowse,hidefromfinder
/Network/Servers    -fstab
/-                  auto_nfs    -nosuid
/-                  -static

Here's my auto_nfs:

/build/mount1 -fstype=nfs,noowners,nolockd,noresvport,hard,bg,intr,ro,tcp,nfc nfs://<some hostname>:/mount1
/build/mount2 -fstype=nfs,noowners,nolockd,noresvport,hard,bg,intr,ro,tcp,nfc nfs://<some hostname>:/mount2

When I restart the autofs service with sudo automount -cv I get the following message:

automount: /net updated
automount: /home updated
automount: /build/mount1 updated
automount: no unmounts

and mount2 is not mounted under my build directory. If I change the order in auto_nfs so that mount2 comes before mount1, then I get only mount2 mounted. If I put the

/- auto_nfs -nosuid

line at the end of auto_master, then nothing works.
Mac OS Sierra 10.12 autofs mounts only the first specified NFS volume
When you stop the autofs service, you're removing the reason your automounts were being unmounted in the first place - for lack of use. This is expected behaviour. So, unless you manually unmount the filesystem, it will stay mounted.
I installed autofs and it's mounting a network drive onto a folder in my local Linux filesystem. I use -fstype=cifs,rw in the mount options in an /etc/auto.smb.shares file. If I cd into the mounted folder (or if I have it open in a Windows Samba share) and stop autofs with systemctl stop autofs, the folder stays mounted. I check with systemctl status autofs, and it says the autofs process is dead. But I check with the mount command and df -h, and the mounted folder is still there; indeed, I can cd in and out of the folder from Linux, or browse in and out from the Windows Samba share. If I manually unmount the drive, then it is unmounted. If I then restart autofs, I get the expected behavior again. And if I cd out of the mounted folder before stopping autofs, the folder is not mounted when I try to cd into it. Am I just describing the standard behavior of autofs, or why does a folder stay mounted if I happen to have it open when I stop autofs? Cheers, Flex
Stopping Autofs when in a mounted folder and the folder stays mounted
My solution was to install cifs-utils, because the filesystem type used by the map is opts="-fstype=cifs":

sudo apt install cifs-utils

But most of the tutorials that I followed don't seem to include this step.
Automounting not working correctly for CIFS shares; weird results

Based on the above question, my issue is that my samba/cifs password contains '*' and '&' characters, which points me to these lines:

if [ -e "$credfile" ]
then
    opts=$opts",credentials=$credfile"
    smbclientopts="-A "$credfile

What would be the correct way to escape the password in the credfile?

username='user'
password='*pass&word?Secure'

This way fails:

username=user
password=\*pass\&word\?Secure

Or do I need to fix the smbclientopts="-A "$credfile line? Thanks for the comments.

2020 update: clean Pop!_OS install:

> apt install samba autofs smbclient
> sudo nano /etc/creds/<<host>>
> sudo chmod rw-r-r /etc/creds/<<host>>
> sudo nano /etc/auto.master   ### edit: /smb auto.smb --timeout=300
> sudo systemctl restart autofs.service

Results:

> ls /smb/<<host>> shows all the shares
> but ls /smb/<<host>>/<<share>> fails:
> ls -l /smb/ccollart/home
> ls: cannot open directory '/smb/ccollart/home': No such file or directory

syslog:

> Feb 5 11:26:33 pop-os kernel: [10292.285802] CIFS: Attempting to mount //ccollart/home
> Feb 5 11:26:33 pop-os kernel: [10292.285816] Unable to determine destination address.

I then installed winbind:

sudo apt install libnss-winbind winbind

BTW, my local DNS does add searchdomain = localdomain, and DNS resolves IPv4 for both "host" and "host.localdomain".
autofs cifs samba
THE SOLUTION! I've figured it out now. The error messages pointed me in the right direction (again), and digging through Google searches I came across the ntfs-3g package and also the ntfs-config package. The latter provides write ability for NTFS drives, and just like that, the message "Volume is dirty. Mounting read-only. Run chkdsk and mount in Windows." didn't appear again. Accessing the shared drive from the Windows machine then worked like a breeze as well. Another great help was the Nautilus file manager, which made permission handling so much easier: just go to the directory, select "right-click -> Properties", and edit the Permissions and Share options from there. (This was clearly the easiest/most beginner-friendly way to do this.)

For future reference, here are all my configuration files and also the steps I took in the CLI:

auto.master:

/- /etc/auto.ntfs -t=60

(Linux manual page for auto.master)

auto.ntfs:

/sharing -fstype=ntfs :/dev/sda1

(Linux manual page for automounter maps)

smb.conf: edit the standard configuration and comment out everything you DON'T need with a hashtag or semicolon. Below are the things I edited.

[global]
workgroup = WORKGROUP   # This was irrelevant for me but I left it activated
wins support = yes      # Tells samba to act as WINS server

# Recycle bin for mounted drive (useful feature, but not mandatory)
recycle:keeptree = yes
recycle:touch = yes
recycle:versions = y
recycle:maxsize = 0

#======================= Share Definitions =======================
# I've commented out everything here except the one I created myself,
# since I didn't want any of the default share definitions.

# Shared network drive
[share]
comment = Pi shared folder
path = /sharing
available = yes
browseable = yes
writeable = Yes
only guest = no
create mask = 0777
directory mask = 0777
force group = sambashare

(Helpful link regarding server types; documentation of the samba configuration file)

CLI:

$ cd /
$ mkdir sharing

To show information about the created directory and its permissions, you can use the $ stat directory/file command.

Below are commands for controlling the autofs and samba services. You'll need to restart these whenever you make configuration changes and want them applied:

$ service autofs {start|forcestart|stop|restart|forcerestart|reload|force-reload|status}
$ /etc/init.d/samba {start|stop|reload|restart|force-reload|status}

After creating the directory, start Nautilus by typing $ nautilus, navigate to the directory, and edit the Permissions and Share options as mentioned above (the checkboxes are very self-explanatory). I enabled read/write for local and Windows users.
I have a Raspberry Pi with samba installed. I've looked into autofs and saw the potential of automounting an external hard drive upon access over the network by my Windows machine. Apparently the provided auto.smb configuration is intended for a samba client application, but my intention is the other way around: I want the server to automount my hard drive whenever I access it over the network, and to automatically unmount it after 5 minutes or so. Plus, the fstype should be set to NTFS.

From my current understanding of autofs, what I need to do is create a configuration file; let's name it auto.ntfs:

driveA -uuid="UUID of my drive",fstype=ntfs,verbose=1 :/dev/sda1

Then I need to add that configuration to auto.master like so:

PATH MAP -options

To be specific, my PATH is /share, so I would add

/share /etc/auto.ntfs -t=60

to /etc/auto.master to automount my external hard drive onto that directory every time I access it over the network. Did I understand the way this works correctly, and what should I do about the configuration file? Are there any things I need to consider? Is it possible? (No, I don't want solutions other than samba, and yes, it has to be NTFS.)

UPDATE

I've added the configuration file. My problem now is that the contents of the drive are not shown when I try to access the drive locally, to test the automount feature by itself.

auto.master:

/share /etc/auto.ntfs -t=60

auto.ntfs:

/share -uuid=E820DC6120DC3870,fstype=ntfs :/dev/sda1

This doesn't work. When I go into the /share directory, I can't see the contents of the drive. Here's the output of $ service autofs status:

Jan 15 13:57:04 raspberrypi automount[529]: key ":" not found in map source(s).
Jan 15 13:57:04 raspberrypi automount[529]: failed to mount /share/:
Jan 15 13:57:04 raspberrypi automount[529]: re-reading map for /share

FIX for the above

For people interested in this question in the future: the above got fixed by checking the dmesg-related messages, which pointed me at the actual cause of the problem instead of just saying that it doesn't work. This command can be helpful to find it:

$ dmesg -w | grep ntfs

(you can grep for other message types if that's different for you). The issue was that the -uuid option was not supported. My final configuration now looks like this:

auto.master:

/- /etc/auto.ntfs -t=60

auto.ntfs:

/sharing -fstype=ntfs :/dev/sda1

After all this bugfixing, it comes to the final topic at hand: Samba.

Currently my problem is that whenever the drive gets mounted, its permissions change inappropriately. I created the shared directory using nautilus-share, since I can simply check the appropriate options there. Here is a snippet of $ stat sharing/ when autofs is disabled:

Access: (0777/drwxrwxrwx)  Uid: (    0/    root)   Gid: (    0/    root)

Here is a snippet of the same command when autofs is enabled:

Access: (0500/dr-x------)  Uid: (    0/    root)   Gid: (    0/    root)

The access is changed upon mount, according to this dmesg message:

ntfs: (device sda1): load_system_files(): Volume is dirty. Mounting read-only. Run chkdsk and mount in Windows.
ntfs: (device sda1): load_system_files(): $LogFile is not clean. Will not be able to remount read-write. Mount in Windows.

I don't know what to do now. Where did I go wrong? I'm thinking that I might need to configure the permissions in the autofs configuration file, but I'm unsure because of the message above. I would be open to suggestions of changing the partition format to something more appropriate if NTFS is NOT recommended for a shared mount!
How do I automount a hard drive to samba using autofs?