You can compare directories with diff:

$ diff -qr dirA dirB
Only in dirB: file.txt
Files dirA/README and dirB/README differ
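If you only want the bare list of files whose contents differ (and not the "Only in" lines), a small filter over diff's summary output is enough. This is just a sketch using the dirA/dirB names from above, and it assumes the file names contain no spaces:

# print only the paths of files that exist on both sides but differ
diff -qr dirA dirB | awk '/^Files .* differ$/ { print $2 }'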
I have two directories with the same files but with some differences in their contents. I want to list the files which differ. For example, there are two folders, Folder1 and Folder2, with files file1, file2, file3, etc. file2 of Folder1 is not identical to file2 of Folder2, so my command should list file2. How can I do that?
Difference between files in directory [duplicate]
This is literally a version control system that you're building there :) Luckily, that is a problem that has already been solved. One such tool is git (it's quite certainly also the most widespread tool in that discipline). I don't know Obsidian, but assuming it doesn't already come with some integration for git:

0. At the very beginning, and only once, you go into the directory containing your .md files and initialize the current directory as a git repository, using git init .

0.1. The Internet(TM) says Obsidian puts its configuration in that same directory, in a subdirectory called .obsidian. We want to ignore that, so we put a text file called .gitignore next to it, with only one line of content: .obsidian/

1. We bring any new files (and at this point, all your markdown files are new) to git's attention: git add file1.md file2.md … (or just git add *.md)

2. We say, OK, note the current state of these added files in the index: git commit -m "This is the initial state"

Congratulations! We just made our first git commit! Run git log -p to see what changes that involved. Since we went from "nothing" to "files with content", we see a log with a patch set that starts with all + for added lines.

Now we change some file. Say, we change the third line in file3.md. We save, and git commit -m "End of day commit" file3.md. (If we instead made a new file, we need to git add filename.md first!) Check git log -p's output: you will see something like

@@ -3,3 +3,3 @@
- old line content
+ new line content

More complex changes work the same – you get a diff representation highlighting what has changed between commits. Git can do much, much more, but explaining that would leave the scope of this question. There are many, many tools to make dealing with git logs more graphical or generally prettier. I use tig a lot, for example.

So, instead of your 3am rsync job, an appropriate git add *.md; git commit -sm "Automatic end of day commit" would make tracking your changes trivial. For backup purposes, you just git push (again, this leads too far, I'm just mentioning it because you said you were copying things via rsync, and I was assuming that was for backup purposes).
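As a rough sketch of how that end-of-day automation could look (the notes path and the digest filename are made up for illustration; nothing here is prescribed by Obsidian or git):

#!/bin/bash
# Hedged sketch: commit the day's note changes and keep a readable "daily digest".
cd ~/notes || exit 1                          # assumed location of the Obsidian vault
git add -A                                    # stage new and modified .md files
git diff --cached > "digest-$(date +%F).txt"  # everything changed since the last commit
git commit -m "Automatic end of day commit $(date +%F)"

Run from cron in place of the 3am rsync job, the digest file then contains exactly the unified diff of that day's edits.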
Background: I use Obsidian to organize my notes as I study. It's an application which works "on top of" a collection of markdown-formatted plain text files and then shows the links and connections between them graphically. The text files are stored within subfolders of a single folder.

What I want: Since I am studying for an upcoming exam, I add information to several of these text files each day. At the end of the day, I'd like to have an automatically generated single text file containing all the changes I made to each of my files. A kind of "daily digest" which I can then read through to revise everything I learned on that day.

Additional background: I already have a cron job set up to use rsync to make an uncompressed copy of my notes folder every day at 3:00 AM. I assume this copy can act as a reference when I want to see what modifications have been made at the end of the current day.

An example: Let's say on the 15th of May, I made the following changes: I added "This is some new text" to file A and added "this is some more textual information" to file B. My "daily digest" at the end of the 15th of May would read:

"This is some new text"
"This is some more textual information"

My research: I understand that tools like diff and meld (GUI) are great for comparing files. The diff output formatting is quite difficult to understand. Meld, on the other hand, is very easy to use, but it's a GUI app, so I can't figure out how to direct its output to a single text file.
How to generate a list of daily changes to my notes?
The numbers you use seem to be very big for bash. You can try something like:

#!/bin/bash
SIZE=$(redis-cli info | awk -F':' '$1=="used_memory" {print int($2/1000)}')
MAX=19000000
if [ "$SIZE" -gt "$MAX" ]; then
    echo 123
fi
Trying to use this:

#!/bin/bash
SIZE=$(redis-cli info | grep used_memory: | awk -F':' '{print $2}')
MAX=19000000000
if [ "$SIZE" -gt "$MAX" ]; then
    echo 123
fi

But I always get: "Ganzzahliger Ausdruck erwartet" (integer expression expected). When I echo SIZE I get a value like 2384934 - I don't have to convert the value, or do I?

OUTPUT of redis-cli info:

# Memory
used_memory:812136
used_memory_human:793.10K
used_memory_rss:6893568
used_memory_rss_human:6.57M
used_memory_peak:329911472
used_memory_peak_human:314.63M
total_system_memory:16760336384
total_system_memory_human:15.61G
used_memory_lua:37888
used_memory_lua_human:37.00K
maxmemory:0
maxmemory_human:0B
maxmemory_policy:noeviction
mem_fragmentation_ratio:8.49
mem_allocator:jemalloc-3.6.0

EDIT: I found the mistake - I used print in the awk command - without it, it works: SIZE=$(redis-cli info | grep used_memory: | awk -F':' '{$2}')
Error when comparing big numbers
The collision probability of md5sum is 1 in 2^64. Refer to this post on crypto.se for more details.

Side note: the contents of the file are hashed; the filename doesn't play any role in hashing. Are you sure the files are different and not just the names?

$ md5 /tmp/files.txt*
MD5 (/tmp/files.txt) = 29fbedcb8a908b34ebfa7e48394999d2
MD5 (/tmp/files.txt.clone) = 29fbedcb8a908b34ebfa7e48394999d2
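A quick way to double-check whether two files with the same md5sum really have the same content is a byte-level comparison (the file names here are only placeholders):

md5sum fileA fileB                                              # same hash?
cmp -s fileA fileB && echo "contents identical" || echo "contents differ"

cmp compares every byte, so it settles the question independently of the hash.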
I'm doing some file listing with the find command as follows: find /dir1/ -type f -exec md5sum {} + | sort -k 2 > dir1.txt Then: find /dir2/ -type f -exec md5sum {} + | sort -k 2 > dir2.txt I noticed that there were some equal hashes despite the files being different, for example an xxxxxxxx.jpg image file with the same hash as a yyyyyyyy.mp3 sound file. The main question is: what is the reliability level of an md5sum file comparison?
Accuracy Level of md5sum Comparison
Do you have documentation about the format? Write a small program to convert the original format into a PCM wav + metadata + 3d data, and back. If it's a superposition of signals on different frequencies, it should compress well with lossless audio compression algorithms, like FLAC. FLAC is like MP3: it compresses audio data by rewriting it into a different format. So it's not what I would call a "wrapper" (I wouldn't call any compression program a "wrapper"). But unlike MP3, the compression is lossless: when you decompress, you get exactly the same data, just like with bzip2 etc. The compression ratio for FLAC on audio data is about 50%. There are various ways to store the metadata in the compressed audio, depending on the container format. It's also possible to just put all three files in an archive file, in a format of your choice, though the compressed PCM data won't be compressed further.
I'm trying to compress some raw sensor data from older recordings that I need and don't want to remove from my CentOS server. The data recorded is in a proprietary format, but for all intents and purposes we can characterize it as 306-channel PCM 32-bit audio recordings at 1000 Hz, with a few hundred lines of clear-text metadata in the header of the file. Files range from 100 MB to 1.9 GB in directories around 7 GB; processed file directories can be up to 60 GB, containing copies of the raw data with filters applied. Here is the weird bit: I can compress the raw data down to 30% of its original size with bzip2 and 26% with pxz, with similar results for lrzip using ZPAQ. But for processed PCM 32-bit variable data I can only shave 10 to 12 percent off. 16-bit short processed data I can compress to about 50% of its original size. Processing simplifies the recording data and there is less variation in the recording. Any suggestions? Anyone have something similar? I want as much space saving as possible on the processed data, and plan to check the data afterwards to make sure it has no errors. Any idea why simplified data out of processing is less compressible than raw? //edit - looking at FLAC, but converting back to the original format may be problematic. Not impossible... still looking. //solution edit: FLAC did not like 370 channels of data. But I was able to gain some compression by creating my own large dictionary and training it with segments of the large files.
Weird compression question
If the values fit in memory, which your 'up to 500' should, and depending on exactly what you want, awk can probably do it in one pass and (at least mostly) in one process.

To list any and all value(s) that occur once, in arbitrary order if more than one:

awk '!n[$2]++{a[$2]=$1} END{for(v in n)if(n[v]==1)print a[v],v}'
# can pipe output to a suitable sort if you want a specific order
# or for GNU awk 4, you can get several non-arbitrary orders
# (less than GNU sort) using PROCINFO["sorted_in"], see the manual

To list one value that occurs once, if there is at least one:

awk '!n[$2]++{a[$2]=$1} END{for(v in n)if(n[v]==1){print a[v],v;exit}}'

To list one value that occurs once and otherwise indicate there are none:

awk '!n[$2]++{a[$2]=$1} END{for(v in n)if(n[v]==1){print a[v],v;exit};print "no unique entry"}'
I have a text file with 2 columns: the first column is a name, the second column is a time value, like:

cat 34M
dog 34M
fish 12M
ant 34M

I need to compare the second column for the same values, and if one is different from the rest I want to flag the entry. So in this case, fish should be flagged because it is different from the rest. The file is dynamic and changes by a for loop on a per-folder basis, so my script should run the comparison in a for loop:

for FOLDER in `find ${DIR}/ -maxdepth 1 -type f -name values.txt`; do
    <something to flag the 'odd' value>
done

I guess I should sort on the second column first and then take the first (highest) value as a lead marker to compare to. The file could contain more than one 'odd' value. The file can contain between 2 and 500 entries. I could do something with a sub-script or with awk, but I have no clue where to start. Thanks for helping me.
quickly compare values in a text file
$ cat tst.awk
BEGIN { wid = 30 }
sub(/^>/,"") { hdr=$1; next }
NR == FNR { a[hdr]=$0; next }
{
    for ( hdrA in a ) {
        strA = a[hdrA]
        lgthA = length(strA)
        for ( idxA=1; idxA<=(lgthA - wid + 1); idxA++ ) {
            substrA = substr(strA,idxA,wid)
            if ( index($0, substrA) ) {
                printf "[%s,%s]\n", hdrA, hdr
                break
            }
        }
    }
}

$ awk -f tst.awk file1 file2
[1,1]
[1,2]
I have two files, each with 3 sequences (200 characters each) with a header, like this:

#File1
>1
TGATTGCATAACCACTTAACATCTTGTTTTATCTAAATAAAATTAAGCATGTTATCTTTTTGGGGCACTCCTGGGGCAGTAGATGCCAGTTGTTGATTCAGTATATCTACTTGTGACTGGTTATTATCCCGATTTTTTTAGTTTTAAGGTGTTGACATAGCCATCCATGCTCCATATACTGTATAGACCATCTGAGCGTT
>2
TGGGAAAACAGCATTCAGCGGTGGCTTATTCCTGCTAAGGATGTTGGCCGCATTCATGCTGAGCACAACCTCGACGGCCTGCTGAGGGGCGATTCGGCATCCCGCGCTGCCTTTATGAAGGCAATGGGAGAGGCAGGGCTACGCACCATCAACGAGATGCGACGAACGGACAACCTCCCGCCATTGCCGGGTGGCGATGT
>3
GAAATGGGAACCGCGAACATGCCTGCACATCCGTTTGTGCGACCCGCTTACGATACTCGCGAGGAAGAGGCCGCCAGCGTCGCCATTGCCAGGATGAATCAGGCTATTGATGAGGTATTGAGCAAGTGAATGAAGATAATATCTACGCCTTGCTTTCTCCCCTGGCAGAAGGACGGGTATATCCCTATGTTGCGCCATTA

#File2
>1
TGATTGCATAACCACTTAACATCTTGTTTTATCTAAATAAAATTAAGCATGTTATCTTTTTGGGGCACTCCTGGGGCAGTAGATGCCAGTTGTTGATTCAGTATATCTACTTGTGACTGGTTATTATCCCGATTTTTTTAGTTTTAAGGTGTTGACATAGCCATCCATGCGGGAAGGTGCAGCATAATGTGCTTTGGATT
>2
TGAGTGCCCCATTTGTGAAGCAATAAAGTTCGGGTTCGCGCCAGCGGCAAGCGCCCAGCATGCACCGATTTTTTTAGTTTTAAGGTGTTGACATTAGGTATGTCGGGACTGGTATGCTTTCCTGTGTCGCAGCCCGGCGCGTCTCAATGCAGATTCCCATATCCTGTTCATCCATATACTGTATAGACCATCTGAGCGTT
>3
TACCTGAGCGATCGGTAATTTGCGGATTGAAGACAAAGGTGCAGGAATGAGTTTTTGTACGACCGTATTCGCGCAGCTTTACTTCAATTTTGTGCTGTTTGCTCAGCTTCGTGAAAGAGGCCTGACTTTTTAAAGCATCAATTGCTGGCTGCACAAGATGTATCACCCTGTCGGTTCCTGCCTGGGTTTTCGGCAGGGTG

I would like to compare each sequence from file1 vs file2 (without considering the headers) (File1: 1,2,3 vs File2: 1,2,3), and if exactly 30 continuous characters are identical in both sequences, I would like to save the headers of the sequences having a match (only those with a match) in an output file. For example, the 30-character string TGATTGCATAACCACTTAACATCTTGTTTT is present in seq1 from file1 and seq1 from file2; TCCATATACTGTATAGACCATCTGAGCGTT is present in seq1 from file1 and seq2 from file2. So I will end up with an output file like:

[1,1]
[1,2]
...
Comparison of N identical continuous characters from a set of two files with sequences
To use the framebuffer as console you need the fbdev module. You may have to recompile your kernel. You may also be interested in the DirectFB project, which is a library that makes using the framebuffer easier. There are also applications and GUI environments written for it already.
So I have a Palm Pre (original P100EWW model) that I enabled developer mode on, and installed a Debian Squeeze chroot. Works great. I have plans to use this for ANYTHING (bittorrent peer, web server) but a phone. I noticed that if I do a cat /dev/urandom > /dev/fb0 it actually writes random pixels to the screen until a No space left on device error is generated. Awesome, now I can use the display. So what kind of utilities are there that will either A) let me use /dev/fb0 as a console I can output text to, or B) render text on /dev/fb0 from the command line? I don't know about recompiling the kernel for this yet (I'd love to eventually strip WebOS off entirely and turn this into a minimal ARM server), so userspace tools, if they exist, are what I'm asking about. I would also prefer to render directly to /dev/fb0 and not use X.
How to use /dev/fb0 as a console from userspace, or output text to it
Yes, outside the X server, in a tty, try the command:

cat /dev/urandom >/dev/fb0

If colourful pixels fill the screen, then your setup is OK, and you can try playing with this small script:

#!/usr/bin/env bash
fbdev=/dev/fb0 ; width=1280 ; bpp=4
color="\x00\x00\xFF\x00"    # red colored

function pixel() {
    xx=$1 ; yy=$2
    printf "$color" | dd bs=$bpp seek=$(($yy * $width + $xx)) \
        of=$fbdev &>/dev/null
}

x=0 ; y=0 ; clear
for i in {1..500}; do
    pixel $((x++)) $((y++))
done

where the function 'pixel' should be an answer... it writes a pixel to the screen by changing byte values (blue-green-red-alpha) at the x-y offset of the device /dev/fbX, which is the framebuffer for the video card. Or try a one-liner pixel draw (yellow at x:y=200:100, if the width is 1024):

printf "\x00\xFF\xFF\x00" | dd bs=4 seek=$((100 * 1024 + 200)) >/dev/fb0

UPDATE: this code works even inside the X server, if we just configure X to use the framebuffer, by specifying fb0 inside /usr/share/X11/xorg.conf.d/99-fbdev.conf
I am not sure if it is the only possible way, but I read that in order to put a single pixel onto the screen at a location of your choice, one has to write something into a place called the framebuffer. So I became curious whether it is possible to get at this place and write something into it in order to display a single pixel somewhere on the screen.
Is it possible to access to the framebuffer in order to put a pixel on the screen from the command line?
I can address your question, having previously worked with the Linux FB.

How Linux Does Its FB. First you need to have framebuffer support in your kernel, corresponding to your hardware. Most modern distributions have support via kernel modules. It does not matter if your distro comes preconfigured with a boot logo; I don't use one and have FB support. It does not matter if you have a dedicated graphics card; integrated will work as long as the hardware framebuffer is supported. You don't need X, which is the most enticing aspect of having the framebuffer. Some people don't know better, so they advocate some form of X to work around their misunderstandings. You don't need to work with the FB directly, which many people incorrectly assume. A very awesome library for developing with the framebuffer is DirectFB; it even has some basic acceleration support. I always suggest at least checking it out if you are starting a full-featured FB based project (web browser, game, GUI ...).

Specific To Your Hardware

1. Use the VESA generic framebuffer; its module is called vesafb. You can load it, if you have it available, with the command modprobe vesafb. Many distributions preconfigure it disabled; you can check in /etc/modprobe.d/. blacklist vesafb might need to be commented out with a #, in a blacklist-framebuffer.conf or other blacklist file.

2. The best option is a hardware-specific KMS driver. The main one for Intel is Intel GMA; I'm not sure what its modules are named. You will need to read up about it in your distro's documents. This is the best-performing FB option; I personally would always go KMS first if possible.

3. Use the legacy hardware-specific FB drivers. Not recommended, as they are sometimes buggy. I would avoid this option unless last-resort necessary.

I believe this covers all your questions, and should provide the information to get that /dev/fb0 device available. Anything more specific would need distribution details, and if you are somewhat experienced, RTFM should be all you need (after reading this). I hope I have helped; you're lucky you're asking about one of my topics! This is a neglected subject on UNIX-SE, as not everybody (knowingly) uses the Linux framebuffer.

NOTE: uvesafb or vesafb? You may have read that people use uvesafb over vesafb, as it had better performance. This WAS generally true, but not in a modern distro with modern hardware. If your graphics hardware supports protected mode VESA (VESA >= 2.0), and you have a somewhat recent kernel, vesafb is now the better choice.
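As a minimal sketch of the checks described above (assuming a Debian/Ubuntu-style /etc/modprobe.d layout; exact file names vary by distribution):

ls /dev/fb*                                  # is any framebuffer device present?
grep -r vesafb /etc/modprobe.d/              # is vesafb blacklisted somewhere?
sudo modprobe vesafb                         # try loading the generic VESA framebuffer
dmesg | grep -iE 'vesafb|frame.?buffer'      # did the kernel register /dev/fb0?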
I'm trying to make a PCMCIA tuner card work in my headless home server, running Debian Squeeze. Now, as I am having a lot of trouble finding the correct command line to capture, transcode and stream the video to the network using VLC, I decided to go step by step and work first on local output. That's where the problem comes in: there seems to be no framebuffer device (/dev/fb0) to access for displaying graphics on the attached screen! And indeed I noticed I don't have the Linux penguin image at boot (I didn't pay attention before, as the screen is attached but always off, and anyway the computer is always on). As I'm not very familiar with Linux graphics, I would like to understand: Is this related to my particular hardware (see below)? Or is it specific to Debian Squeeze / a kernel version / ...? Is there some driver I need to manually install/load? Now some general information: The computer has no dedicated graphics card, but an embedded graphics chipset (Intel G31 Express) on the motherboard (Gigabyte G31M-ES2L). I don't want to install a full-featured X server, just have a framebuffer device for this particular test. Any ideas/comments on the issue?
No framebuffer device: how to enable it?
Since nobody's answered yet, and after tedious hours of googling and testing I got some grasp of the subject, I'm going to answer it myself... Since the framebuffer device interface is a quite general one, there could be more fb devices in principle. However, as the VESA driver I used provides a direct connection between a certain hardware device and the framebuffer device file, it doesn't make sense to have more of them than one has real devices. There's a driver for virtual framebuffer devices, vfb. (Note: different from xvfb, which is a virtual framebuffer for X.) I haven't tested this myself, but one could have as many fb devices as one wants using the virtual device. I also think that nothing in principle prevents one from piping a virtual device to a hardware framebuffer device, allowing one to build a framebuffer multiplexer. About the connection between framebuffers and ttys: there is none. The framebuffer is simply drawn to the screen, disregarding anything. What got me originally confused is the behavior of the fbi image viewer. It turns out that it cleverly checks whether the tty it's running in is open or not, and draws to the framebuffer or not according to that. (That's why it refuses to run over SSH, unlike mplayer – it doesn't accept a pseudo terminal.) But the multiplexer-like functionality has got NOTHING to do with the framebuffer itself. If there are multiple processes writing to the framebuffer, they do not block each other. It turns out that my earlier problems (crashes and such) using multiple fb programs simultaneously were not even about the framebuffer at all. Take the fbterm terminal and run mplayer from it: no problem. The fbterm and fbcon terminals and the fbi image viewer draw to the buffer only when something is updated, so mplayer dominates the screen virtually 100% of the time. But if you try to run two mplayers, you are going to get a view that flickers between frames of the one and the other, as they race to draw to the buffer. Some useful links: http://moi.vonos.net/linux/framebuffer-drivers/ https://www.kernel.org/doc/Documentation/fb/framebuffer.txt
I'm running Ubuntu 12.04 LTS as a home NAS server, without X. Recently I got into tuning it to serve as a video-playing media device too. It might've been easier at this point to install X, but I decided to try mplayer with framebuffer playback. It worked, and everything was fine and good. However, out of curiosity and maybe for practical reasons too, I can't stop thinking about framebuffers. There seems to be only one framebuffer device, /dev/fb0. (Btw. I'm using the vesafb driver.) If I run multiple programs that use framebuffers, chaos ensues. For example, running mplayer from fbterm just crashes it. Curiously, the fbi image viewer manages to view images somehow. Obviously the programs can't share the device; there's no windowing system after all. So, is the number of (vesa) fb devices limited to hardware display devices? Or could there be more in principle, like there are multiple ttys? Would adding some more help running software that uses them simultaneously? How could I add more? Also the logic of how the framebuffers are connected to ttys isn't quite clear to me... for example, mplayer shows its video frame on every tty, but fbi doesn't. Furthermore, the Ubuntu default console (fbcon?) shows behind the video overlay, which strikes me as odd. What is this all about?
How can I add an additional framebuffer device in Linux?
Figured this out. You may need to add video=efifb to ensure that the framebuffer console is used: GRUB_CMDLINE_LINUX="video=efifb fbcon=rotate:1"EDIT: The efifb driver is designed for EFI firmware only, especially Intel-based Apple computers. However, as I've found out, it also works for non-Apple PCs. I am running the proprietary nVidia drivers on my Linux system, and the efifb driver works quite well. I assume it works for me because I am using nVidia drivers, and the "native" fbdev driver conflicts with them. To be honest, I don't fully understand why the efifb driver makes things work, but if someone else does (or if you can get things working with another framebuffer driver with nVidia drivers installed), please comment below. Thanks!
I want to rotate my console (not X Server) by 90 degrees (clockwise). The following seems to work for me: echo 1 > /sys/class/graphics/fbcon/rotate; however, I'd prefer to use a kernel option in Grub, rather than including the above in the /etc/rc.local script. The fbcon documentation outlines the following option that can be passed to the kernel: fbcon=rotate:<n>. Unfortunately, when I modify /etc/default/grub and modify the GRUB_CMDLINE_LINUX line like this: GRUB_CMDLINE_LINUX="fbcon=rotate_all:1"... it doesn't work. I also ran update-grub before rebooting. I've also tried this: GRUB_CMDLINE_LINUX="fbconsole=rotate_all:1"Still nothing. Any thoughts?
Rotate console on startup (Debian)
Programmatically, to retrieve information about a framebuffer you should use the FBIOGET_FSCREENINFO and FBIOGET_VSCREENINFO ioctls:

#include <fcntl.h>
#include <linux/fb.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>

int main(int argc, char **argv)
{
    struct fb_fix_screeninfo fix;
    struct fb_var_screeninfo var;
    int fb = open("/dev/fb0", O_RDWR);
    if (fb < 0) {
        perror("Opening fb0");
        exit(1);
    }
    if (ioctl(fb, FBIOGET_FSCREENINFO, &fix) != 0) {
        perror("FSCREENINFO");
        exit(1);
    }
    if (ioctl(fb, FBIOGET_VSCREENINFO, &var) != 0) {
        perror("VSCREENINFO");
        exit(1);
    }
    printf("Line length: %u\n", fix.line_length);
    printf("Visible resolution: %ux%u\n", var.xres, var.yres);
    printf("Virtual resolution: %ux%u\n", var.xres_virtual, var.yres_virtual);
}

line_length gives you the line stride.
Goal: I'm writing a very simple image viewer for the framebuffer /dev/fb0 (something like fbi).

Current state: My software takes the pixel resolution from /sys/class/graphics/fb0/virtual_size (such as 1920,1080), and then (for each row) it writes 4 bytes (BGRA) for each of the 1920 row pixels (total 4x1920=7680 bytes) to /dev/fb0. This works perfectly fine on my one laptop with a 1920x1080 resolution. More precisely: setting a pixel at y-row, x-col => arr[y * 1920 * 4 + x * 4 + channel], where channel is 0,1,2,3 (for B, G, R, and A, respectively).

Problem: When I try the same software on my old laptop (/sys/.../virtual_size -> 1366,768 resolution), the image is not shown correctly (a bit skewed). So I played around with the pixel-width value and found out the value was 1376 (not 1366).

Questions: Where do these 10 extra pixels come from? And how can I get this value on different machines (automatically, not by manual tuning)? Why do some machines need these extra 10 pixels when others don't?
How can I get the number of bytes to write per row for the FrameBuffer?
As of 2017, qemu doesn't provide text-mode-only graphics card emulation for x86-64 that would force a guest to stay in text mode. Current distributions like Fedora 25 come with the bochs_drm kernel module that enables a framebuffer (e.g. a 1024x768 graphics mode) by default. In contrast, e.g. Debian 8 (stable) doesn't provide this module and thus stays in old-school text mode during the complete boot.

Thus, when running qemu from a terminal (e.g. with -display curses) it makes sense to enable a serial console as a fail-safe:

console=tty0 console=ttyS0

or

console=tty0 console=ttyS0,115200

(Kernel parameters for the guest; the default speed is 9600, and both settings work with qemu. Make the settings persistent in Fedora by assigning them to GRUB_CMDLINE_LINUX in /etc/sysconfig/grub and executing grub2-mkconfig -o /etc/grub2.cfg or grub2-mkconfig -o /etc/grub2-efi.cfg.) In case nothing else works, one can then switch inside qemu to the serial console via Alt+3.

A second measure is to disable the framebuffer via a bochs_drm module parameter - i.e. via setting it on the guest kernel command line:

bochs_drm.fbdev=off

Blacklist Alternative

Alternatively, the bochs_drm module can be blacklisted - i.e. via creating a config under /etc/modprobe.d - say - bochs.conf:

blacklist bochs_drm

Since the initramfs mustn't load the bochs_drm module either, one has to make sure that this config is included in the initramfs. On Fedora-like distributions this is achieved via:

# dracut -f

UEFI Boot

When booting qemu with a UEFI firmware (e.g. -bios /usr/share/edk2/ovmf/OVMF_CODE.fd), disabling the bochs fbdev isn't enough. The Fedora boot then hangs while trying to switch to the bochs framebuffer. Blacklisting bochs_drm fixes this, but it isn't sufficient: one just gets a 640x480 graphics mode that isn't reset to text mode by the kernel. Thus, for UEFI guests one has to take the serial console route.

Serial Console

Using the serial console in combination with -display curses yields a suboptimal user experience, as curses interferes with the vt100/vt220 terminal emulation. Thus, it only suffices for emergencies. A better solution is to completely switch the display off and use the combined serial/monitor qemu mode:

-display none -serial mon:stdio -echr 2

(where Ctrl+b h displays a help and Ctrl+b c switches between the modes)

With Fedora 27, Grub2 is configured with serial console support by default. Thus, it can be controlled via the serial terminal as well. Calling resize after login updates the terminal geometry, so the resulting terminal behaves as well as a local one.

Multi-User Target

In case the guest image has a graphical login manager installed, it makes sense to disable it:

# systemctl set-default multi-user.target

Otherwise, one has to switch to the first virtual console after each boot (e.g. Alt+2 or Alt+3 when using the curses display).
The QEMU options -display curses and -nographic -device sga (the serial graphics adapter) are very convenient for running QEMU outside of a graphical environment (think: remote ssh connection, rescue system, etc.). Both modes fail to work with framebuffer text mode, though. The new default with some Linux distributions (e.g. Fedora 25) seems to be that at some point during boot a framebuffer text mode is activated, such that with -display curses QEMU just displays '1024x768 Graphic mode'. With SGA just nothing is printed. Thus my question: how can I force the kernel (and the rest of startup) to just use the old-school initial text mode?

Addendum: Adding the nomodeset kernel parameter (and removing the rhgb one) doesn't make a difference. Most convenient would be some QEMU configuration that forces the kernel to just detect the most basic text mode - since then the guest wouldn't have to be modified. Setting up a serial console (via e.g. adding the console=ttyS0 kernel parameter to the guest) works in my environment, but I observed some escape sequence issues with the Gnome terminal. Also this doesn't help with boot loaders that already use the framebuffer (e.g. the one on the Fedora 25 server ISO) - and it needs a modification of the guest.

Fedora Guest Example: With Fedora 25 as guest, the switch to the framebuffer happens during initrd runtime; some log messages (from the serial console):

[ 1.485115] Console: switching to colour frame buffer device 128x48
[ 1.493184] bochs-drm 0000:00:02.0: fb0: bochsdrmfb frame buffer device
[ 1.502492] [drm] Initialized bochs-drm 1.0.0 20130925 for 0000:00:02.0 on minor 0

These messages also show up with the nofb and vga=normal (guest) kernel parameters.
Disable framebuffer in QEMU guests
The standard spelling is "framebuffer", without a space. In the Linux kernel, fbdev is an (optional) graphics abstraction layer for video hardware (a.k.a. the video card). Different video hardware needs different drivers (which may be loaded as kernel modules), but user-space software, such as mplayer, uses a unified API to write to it. The word framebuffer itself means a part of video memory where a video frame is stored. Yes, it is configurable. First, you can choose which driver to load (or build into the kernel). Second, there is fbset(8), which changes modes and other settings, as well as some higher-level utilities. Limitations? When you use a framebuffer driver, you can't enjoy hardware (e.g. VGA-compatible) text mode and you suffer some overhead - that's the most serious limitation I know of. See http://tldp.org/HOWTO/Framebuffer-HOWTO/ for more details. There are plenty of video output drivers for mplayer (besides framebuffer and X11), but I don't know which are better and in which sense.
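For example, fbset can both report and request modes; whether a mode change actually succeeds depends on the driver (vesafb in particular is largely fixed at boot), so treat this as a sketch:

fbset -fb /dev/fb0                                   # show the current geometry and timings
fbset -fb /dev/fb0 -xres 1024 -yres 768 -depth 32    # request a different mode, if the driver supports it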
I was reading the mplayer man pages, trying to play video using just the console (I don't have X11 and don't want to install it). It mentions that I can use the kernel frame buffer device, which is fbdev2. It works, but I don't know much about what the "kernel frame buffer" is. Can I configure it? Are there limitations to its use? Does it use the video card to render graphics? Also (I have an integrated Intel graphics card on my laptop), are there alternatives or better solutions to play video from the console that are not the kernel frame buffer?
What is the kernel frame buffer?
lsof doesn't show anything with /dev/fb0 open.

It won't. There's a terminal emulator program built into the Linux kernel. It doesn't manifest as a running process with open file handles. It's layered on top of the framebuffer and the input event subsystem, which it uses internal kernel interfaces to access. It presents itself to application-mode systems as a series of kernel virtual terminal devices, /dev/tty1 and so forth; a pseudo-file under /sys that shows the active KVT number; and a series of CGA-style video buffer devices, /dev/vcsa1 and so forth.

One of those application-mode systems is of course the getty+login system, which can be configured to operate on these kernel virtual terminals, and (as you have found) is by default. You can easily rid yourself of the getty processes using documented systemd mechanisms. In an old System 5 init system, each getty would be a record in /etc/inittab. In a BSD init system, each getty is a record in /etc/ttys. In a systemd system, things are a little indirect.

The "login" daemon, logind, knows about things called "seats" in systemd slang. "Seat" zero is the one with the primary framebuffer and all of those kernel virtual terminals. For that seat, logind attempts to start N systemd services, named autovt@tty1.service through to autovt@ttyN.service. The value of N is set in the NAutoVTs setting in /etc/systemd/logind.conf. These systemd services are created from a service template unit, named autovt@.service. The template parameter is, as above, the device name of the kernel virtual terminal's device file, in /dev/. autovt@.service is, in the default configuration, a symbolic link to getty@.service. It is getty@.service that describes running a getty program, set to do its input/output via the kernel virtual terminal device file.

So to stop any of this, visit /etc/systemd/logind.conf and configure logind not to auto-start any autovt services (and not to reserve any virtual terminals, if you want to be thorough about it).

However, that is not the whole of it. The terminal emulator program is still active in the kernel, and everything from log messages directed to a kernel VT through to the regular flashing of the cursor will cause the terminal emulator to interfere with your use of the framebuffer. But that's a matter for coding the program that you have that uses the framebuffer to negotiate with the kernel terminal emulator program, which has already been answered here.

The serial console login happens via a quite different route, by the way. A generator creates instances of the serial-getty@.service template unit at boot time, instantiating it once for each kernel console device that it finds, or is told about.

Further reading:
- Best practice for hiding virtual console while rendering video to framebuffer
- https://superuser.com/a/723442/38062
- logind.conf. systemd manual pages. freedesktop.org.
- "multiseat". systemd. freedesktop.org.
- systemd-getty-generator. systemd manual pages. freedesktop.org.
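A minimal sketch of that logind change (NAutoVTs and ReserveVT are documented logind.conf settings; adjust the values to taste):

# in /etc/systemd/logind.conf
NAutoVTs=0
ReserveVT=0

# then restart logind (or reboot) so the change takes effect
sudo systemctl restart systemd-logind

With both set to 0, logind neither auto-spawns autovt@ttyN.service instances nor keeps a reserved getty, leaving the framebuffer free apart from the in-kernel terminal emulator itself.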
I'm working with an embedded platform and need to have /dev/fb0 clear for my own use (the device is accessible over a serial console while the screen is used to display information, without X). I've already changed default.target from graphical to multi-user, but now it opens getty with a login prompt on the framebuffer device and I just can't locate which service that is. I don't want to disable the serial console login by accident, and lsof doesn't show anything with /dev/fb0 open. The distribution is Yocto Linux, if that's of any help.
Which systemd service starts text console on the framebuffer device?
A solution seems to be to replace xvfb with a real X11 server using the dummy driver from the package xserver-xorg-video-dummy. This askubuntu answer provides an example Xorg.conf file, but most people seem to refer to this xpra wiki on using this driver, with its example conf file.
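A minimal dummy-driver configuration, adapted from those examples (the file path, resolution, and VideoRam are placeholder values, not anything the package mandates):

# /etc/X11/xorg.conf.d/10-dummy.conf (assumed path)
Section "Device"
    Identifier "DummyDevice"
    Driver "dummy"
    VideoRam 256000
EndSection

Section "Monitor"
    Identifier "DummyMonitor"
    HorizSync 30.0-70.0
    VertRefresh 50.0-75.0
EndSection

Section "Screen"
    Identifier "DummyScreen"
    Device "DummyDevice"
    Monitor "DummyMonitor"
    DefaultDepth 24
    SubSection "Display"
        Depth 24
        Modes "1280x1024"
    EndSubSection
EndSection

An Xorg server started against such a config behaves like a normal X server (including RANDR) without driving any physical output.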
My system:

$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 14.04.3 LTS
Release: 14.04
Codename: trusty

Xvfb:

$ dpkg -s xvfb
Package: xvfb
Status: install ok installed
Priority: optional
Section: x11
Installed-Size: 2140
Maintainer: Ubuntu X-SWAT <[emailprotected]>
Architecture: amd64
Multi-Arch: foreign
Source: xorg-server
Version: 2:1.15.1-0ubuntu2.7
Provides: xserver

Current problem: Xvfb does not support the RANDR extension, even if I add the flag:

+extension RANDR

If I run xdpyinfo, RANDR is not on the list. It's a missing feature or a bug. I found a reference here with a patch: https://bugzilla.novell.com/show_bug.cgi?id=823410 And it looks like in other distros, like Debian, there is already a testing build of Xvfb with support: Running Firefox in Xvfb: extension "RANDR" missing on display

I am trying to run a program through Xvfb, and it returns the following error:

Xlib: extension "RANDR" missing on display ":99".

The program works if I run it via ssh/command line. The problem appears to be the lack of support for "RANDR" in Xvfb. My question is: what is the easiest way to get Xvfb with "RANDR" support on my system?
Extension “RANDR” missing on xvfb
AFAIK, editing /boot.cfg is the preferred method. You can even specify more human-readable modes; I am using (on -current, 7.99 in a VirtualBox VM) menu=Boot normally:rndseed /var/db/entropy-file:;vesa 1024x768x32; boot netbsdI think having this in the kernel somehow without it being compiled in would be bad - if you update your kernel you'd lose the setting. The /boot.cfg method is persistent and human-readable.
When booting NetBSD, the old Tecra 720CDT that I have works quite nicely in 1024x768x15 mode with the vesa framebuffer. I always have to activate VESA when booting the system:

> vesa on
> vesa 0x116
> boot netbsd

Now, I was able to somewhat automate this process by editing /boot.cfg:

menu=Boot normally:rndseed /var/db/entropy-file;vesa on;vesa 0x116;boot netbsd

No idea if this is preferable. I'd actually like to set this kind of behavior in the kernel itself. On OpenBSD, I'd simply use config to change the kernel settings. That, however, does not work on NetBSD; I'd have to recompile the kernel (that's my understanding). Now, when looking through the config file, I couldn't find anything related to vesa or switching to framebuffer mode while booting. Is this even possible? If so, how do I do that?
How to enable VESA framebuffer as default in NetBSD 6.1
I swapped the cables to the video card... simple ... brilliant!
Does the kernel, framebuffer, or a framebuffer driver (uvesafb) have an option to specify the video-card output to use? The kernel only outputs to one monitor:Kernel message buffer (before and after framebuffer initialization) Framebuffer (if specified in kernel command-line) Virtual console (tty's etc)Note: I have no issues with X-windows configuration (only the console) Kernel options (with fb): kernel /stable root=/dev/sda3 video=uvesafb:mtrr:3,ywrap,1920x1200-32@60 Kernel options (no-fb): kernel /stable root=/dev/sda3 System Information: Gentoo Linux (x86_64) Kernel: linux-3.3.8-gentoo Video Card: NVIDIA GeForce GTX 560 (2x DVI outputs) Driver: NVIDIA Driver Version 302.17
Specify Monitor For Linux Console
xpra is your friend: http://xpra.org/. Install xpra on server and client. Start xpra server over ssh with xpra start-desktop ssh:user@server:XVFBDISPLAY --use-display --start-via-proxy=noAlternatively: If you are already logged in to the server, you can start xpra server with xpra start-desktop :XVFBDISPLAY --use-display --start-via-proxy=noStart xpra on client with xpra attach ssh:server:XVFBDISPLAYYou can detach and reattach later again: xpra detach ssh:server:XVFBDISPLAY(Replace XVFBDISPLAY with the display number of Xvfb.)
I am locally on a machine without root rights. X forwarding is disabled. Remotely I am running a process on a machine without a screen, using the Xvfb virtual framebuffer, which simulates an X server but discards any image displayed. This works reasonably well. Now, some things are not working, and I need to debug by looking at the X screen. I did take a screenshot in Xvfb with xwd -display :99 -root -out /tmp/screenshot.xwdump but it is quite complicated to look at many of them in a sequence. Is there a way to connect from my client to the server, and then connect to the framebuffer in order to display the remote X window locally? This could be a second ssh channel. The X program should ideally keep running after disconnecting, and I would like to be able to start it before the second connection if possible (think spice). I only have outgoing connections to the server, and only to port 22. On neither machine are root rights available. PS: This question is similar to Running programs over ssh, but my requirement is that no program can be installed as root on client or server, which seems to rule out xpra (the answer given there) unless I find an easy way to use it.
How to display an X11 screen from a remote machine? (Alternative to ssh -X)
Sorry for repeating myself, but take a look at the Nano-X sources:

git clone git://microwindows.org/microwin

In particular, take a look at the files: drivers/kbd_tty.c drivers/scr_fb.c What is done in the tty driver is very similar to what Xorg does, and the devfb driver is a very simple and clean implementation. Linux's devfb framebuffers mostly rely on ioctl (e.g. to set/get the resolution) and mmap (to raw write/read pixels). devfb is just one (easy and a bit more portable on Linux) way to access the graphics hardware. Xorg drivers instead are composed of a kernel driver and a Xorg user-space interface between the driver and Xorg itself, and what happens between the kernel and user side is really implementation-dependent (there isn't a standard). You can also take a look at SDL or DirectFB, but Nano-X is the cleanest/easiest and a display server itself, so it could probably help you with other questions that you'll surely run into.
I'm writing my own display server as an educational exercise. Where in the Linux kernel tree would I look for documentation on the console's graphical mode? Basically, as I understand it, Xorg takes over the tty device and also takes over the raw hardware. How can I find documentation on duplicating that action?
Where would I start looking for documentation on the graphical mode of the Linux console?
For Nouveau: Judging from the Forcing modes section, and the drm_fb_helper.c source linked from there, it looks like you need to write a custom driver and override the drm_fb_helper_single_add_all_connectors with your own routine to get multiple framebuffers for different outputs. Not easy to do if you are not a programmer. (BTW, a framebuffer is a piece of memory that stores the pixels you see on your monitor(s). The /dev/fb device(s) expose that piece of memory to linux programs, and the modesetting part (also kernel modesetting, KMS) instructs the hardware to display that particular framebuffer with a particular resolution and frequencies.)
I am trying to have the tty (at boot) display on a secondary monitor rather than the one it currently starts on, or even better, to have multiple ttys running at once on different displays, as is described here (in section C3). After trying to work this out and running:

cat /proc/fb

and getting an output of:

0 EFI VGA

I have to say that I have no idea what to do, whatsoever. Should I have different monitors as different framebuffers? Is that viable? Should I have it all as one framebuffer?
How to get tty to display on another monitor (using nvidia drivers)
I know this is an old question, but it's still valid. In order to have /dev/fb0 you need to have framebuffer support enabled in your kernel. To check, you can grep the configuration of the currently running kernel:

grep CONFIG_FB_ /boot/config-3.10.0-693.17.1.el7.x86_64

For a virtual environment you probably need VESA enabled, so grep for VESA and you should get the following output:

CONFIG_FB_BOOT_VESA_SUPPORT=y
# CONFIG_FB_UVESA is not set
CONFIG_FB_VESA=y

If you have this configured, you will see the device /dev/fb0. Note: for older kernels like 4.9 you may need to add vga=0x317 to the command line.
Recently ran into a situation where I need to install headless TeamViewer on a CentOS 7 server on Linode. This requires /dev/fb0 in order to function. So far it has not been clear on how to enable the framebuffer (/dev/fb0). What do we need to do to install kernel support for a virtual machine?
Enabling /dev/fb0 on a CentOS 7 virtual machines?
OK, here is why the problem occurs: it's because Neovim provides the guicursor option, which should be set only when using a graphical terminal inside X11 that supports it. In a bare TERM=linux tty, using the option is not supported. I managed to isolate this problem by launching Vim instead of Neovim: because Vim doesn't provide this option, I didn't experience it there. What makes it hard to isolate is the fact that it continues even after you exit Neovim, so it made me think it was a general problem.
I'm trying to set up my Linux console - the bare TTY terminal without X. I tried to capture this problem with asciinema but interestingly, it didn't show up there, so I captured it with my own camera; here is a link to the video. It doesn't appear in [n]vim only; it is completely random and it appears sometimes on the command line as well. I'm pretty sure it has nothing to do with the font. Has anyone ever encountered such strange behavior before?

Edit: more info: I'm using Arch Linux and I think there was a problem with the way I installed the OS. In the past, I made a terrible mistake which deleted almost all files in /usr/. Afterwards, I decided I didn't need to reinstall the whole filesystem, only the GNU core programs and the kernel with pacstrap. This problem appeared after that.

Troubleshooting: I tried reset and it doesn't help. I tried LC_ALL=en_US.UTF-8 nvim test.txt and LC_ALL=C nvim test.txt in order to see if it's related to locale settings, and that doesn't help either.
Linux Console prints the letter 'q' randomly
You can probably find the value in /sys/class/graphics/fb0/stride which is the length of the line in bytes according to the source. You need to divide by the bits_per_pixel divided by 8 to get the stride in pixels.
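A small sketch of that computation from the shell (fb0 and the stride attribute as in the answer; bits_per_pixel is a sibling sysfs attribute):

stride_bytes=$(cat /sys/class/graphics/fb0/stride)
bpp=$(cat /sys/class/graphics/fb0/bits_per_pixel)
echo $(( stride_bytes / (bpp / 8) ))   # line length in pixels, e.g. 1376 rather than 1366

The result is the number of pixels you must step per row when writing to /dev/fb0.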
I am accessing /dev/fb0, the screen's frame buffer, in console mode on Debian 10, using an HP Envy Touchsmart laptop and the fwrite standard C function. I read this page: https://techoverflow.net/2015/06/21/querying-framebuffer-resolution-in-linux/ It states that cat /sys/class/graphics/fb0/virtual_size should return fb0's dimensions. It returns 1366 x 768 pixels, which is my actual screen resolution. So far so good. But when I write into /dev/fb0, I actually need to write 1376 pixels before I start a new row. Please note pixels are 32-bit packets, so it does not look like there is an underlying scanline alignment issue. We are talking about a difference of 10 times 4 bytes, i.e. 40 bytes, which is a lot. Where does this discrepancy come from? How do I get the scanline width information without having to find it out visually?
How do I get fb0 scanline length?
A bit late, but fbterm can do it fbterm -s 40
I need to set a font larger than 32x16 for my framebuffer console. As far as I know, 32 is the maximum you can do. Is there a workaround? I'm fine with starting an alternate framebuffer terminal (but which one?). I can't run X and I can't lower the resolution of my display. My /etc/default/console-setup looks like this: FONTFACE="Terminus" FONTSIZE="32x16" Something like 40xSomething would be the sweet spot.
How do I set a framebuffer console font > 32x16?
To stop /dev/tty1 from overwriting the buffer I'm using systemctl stop getty@tty1.service and then I make the cursor invisible with:

/usr/bin/tput civis > /dev/tty1

This still allows me access to the console after a reboot, should I lose access via ssh.
I'm running a Raspbian Buster server with no X server. I want to display wallpaper on a connected television, but /dev/tty1 keeps overwriting /dev/fb0, either with a blinking cursor or just by refreshing randomly 60 seconds after I make the cursor invisible (from [emailprotected]). My new strategy is to completely prevent /dev/tty1 from ever writing to the framebuffer. Thanks for any help.
How can I prevent a TTY (e.g. /dev/tty1) from writing to the framebuffer (/dev/fb0)?
I am not certain that Xvfb supports resizing. If your main interest is VNC, perhaps you should try TigerVNC. It's a modern VNC server that supports RandR and Xinerama. Screen resizing and multiple monitors work very well in TigerVNC.
I run Xvfb with command: Xvfb :1 -screen 0 100x100x16 -fbdir /tmpAnd it's working fine. I can connect via VNC, and now under /tmp directory I have Xvfb_screen0 binary file. I thought it will act like /dev/fb0 so I tried to change its settings with fbset like: sudo fbset -fb /tmp/Xvfb_screen0 -xres 500 -yres 500But the command finishes with error:ioctl FBIOGET_VSCREENINFO: Inappropriate ioctl for deviceIs there any way to change running Xvfb server resolution?
Changing Xvfb frame buffer resolution while it's running
X does not create a new screen. To use the same display and input event devices that the kernel's built-in terminal emulator is using (to present its virtual terminals), a program must arrange to share them. The kernel's terminal emulator provides an API through which such a program can negotiate when it has responsibility for input and output, and when the kernel's built-in terminal emulator has. This API is through ioctl() calls on a file descriptor that is open to a kernel virtual terminal character device. There are 64 of these devices in Linux, 16 in FreeBSD/PC-BSD. X does not create these. It opens an existing one - by convention one which no TUI programs are simultaneously trying to use as a kernel virtual terminal. In other words: by convention there's no TUI login session run on the kernel virtual terminal device that X opens and uses.

A program that shares with the kernel terminal emulator must:

- Tell the kernel terminal emulator to stop writing into the framebuffer to display output, or the cursor. This is done with the KDSETMODE ioctl() to set the nowadays quite misnamed KD_GRAPHICS mode. When in KD_TEXT mode the kernel terminal emulator doesn't nowadays usually have anything to do with the display hardware being in an actual text mode. So-called framebuffer consoles have the display hardware in graphics mode. The distinction between KD_TEXT and KD_GRAPHICS modes is that in the former mode the kernel's terminal emulator will draw character glyphs onto the framebuffer as the terminal line discipline delivers output to it, and will also draw a cursor; whereas in the latter mode it will not do any drawing at all. These would actually be better thought of as "draw graphics" and "don't draw graphics" modes nowadays, were the wrong one not named "graphics". ☺

- Negotiate virtual terminal switching, if applicable. This is done with the VT_SETMODE ioctl(), with which the program can arrange to receive signals when the virtual terminal that it is using for the ioctl() calls is switched to or away from.

- Negotiate the handling of input with the kernel terminal emulator. On Linux one might be reading from the input event subsystem directly, in which case the program tells the kernel's terminal emulator to stop reading those same input events, which it receives copies of, to stop translating them into characters, and to stop sending them off to the line discipline as input. How this is done varies:

  - The original way to do this was with the KDSKBMODE ioctl(), switching the virtual terminal into K_RAW mode. In this mode, the kernel terminal emulator still receives input events from the kernel's input event subsystem, but it performs no processing of them whatsoever, passing them to the line discipline as character input. However, this mechanism (which had its roots in the way that X worked before there was an input event subsystem) was broken, in that input was still being sent to the line discipline and still had to be drained. And it required that the termios input state for the terminal also be in raw mode, otherwise the raw scancodes would be misinterpreted as special characters such as the STOP or INTR characters by the line discipline.

  - A way once considered to be better was to use the KDSKBMODE ioctl() to switch the virtual terminal into K_OFF mode. In this mode, the kernel terminal emulator not only wouldn't process the input events, it wouldn't send them along to the line discipline. However, this mechanism was broken too, because it was part of a K_OFF/K_RAW/K_CODE/K_XLATE mode switch. systemd and other similar systems would manage virtual terminal modes, and end up switching virtual terminals out of K_OFF mode.

  - The better way nowadays is to use the KDSKBMUTE flag. This turns off all input event processing without affecting, or being affected by, the K_RAW/K_CODE/K_XLATE mode switch.

  On FreeBSD/PC-BSD, there's no separate input event character device in the first place. One reads keyboard input through the kernel virtual terminal anyway, so whilst one might want to switch it into scancode (K_RAW) or keycode (K_CODE) modes, one does not want to switch it off.

There are some interactions here. An X server, for example, switches the virtual terminal into keycode mode, reads the keycodes and turns them into X keysyms, passing them through the X keyboard handling mechanisms. This means that the kernel's built-in terminal emulator never gets to perform the special processing for the Alt+Fn keyboard sequences. It is the X server that has to itself recognize Ctrl+Alt+Fn.

Further reading:
- Arthur Taylor (2013-02-02). systemd should not call KDSKBMODE on a VT with X. systemd-devel.
- Adam Jackson (2012-11-16). [PATCH] vt: Drop K_OFF for VC_MUTE. Linux kernel mailing list.
- Adam Jackson (2012-11-16). [PATCH] linux: Prefer ioctl(KDSKBMUTE, 1) over ioctl(KDSKBMODE, K_OFF). xorg-devel.
- Michael K. Johnson (1994-06-01). Linux Programming Hints. Linux Journal.
I'm currently rendering video in Linux directly to the framebuffer using GStreamer. I was wondering how I would go about hiding the virtual console while rendering. I can stop the cursor from blinking, but that only works when no text changes on the console. X seems to create a new screen accessible with Ctrl(+Alt)+F7 – is it possible to do something like that myself? Somehow be able to switch between a console & the rendering screen with Ctrl+Alt+F1 and Ctrl+Alt+F2.
Best practice for hiding virtual console while rendering video to framebuffer
The general answer is: you cannot. The framebuffer is a different (you could say: more "basic") way of interfacing with the graphics than what an X server creates. Only apps that were designed to utilize a framebuffer are able to do it, and there aren't many graphical apps with such support - the framebuffer is mostly used for text-mode (console) applications. Firefox is a classic example of an app that was designed to run on top of an Xorg server (just as most graphical apps are). However, if you are really interested, there are some projects that use the framebuffer as a base for somewhat more advanced graphical apps. Probably the most advanced can be found on the DirectFB project page. This actually does contain some information about running Firefox in framebuffer mode (that is, under a DirectFB environment). Notice however that it is only an experimental port of Firefox - very old and apparently abandoned around 07-2008.
If I can see a movie from the console (like in this post), then how can I use other apps like Firefox from the console? I'm looking for something that works in Ubuntu, Fedora, or OpenBSD.
How to run an app in a framebuffer?
DirectFB might be what you are looking for. If you needed higher level API, SDL should be able to use it as its backend.
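For SDL 1.2 specifically, the backend is selected at runtime through environment variables, so a sketch of running an SDL program straight on the framebuffer (the program name is a placeholder) could look like:

SDL_VIDEODRIVER=fbcon SDL_FBDEV=/dev/fb0 ./my_sdl_app    # SDL 1.2's own fbcon backend
SDL_VIDEODRIVER=directfb ./my_sdl_app                    # or via DirectFB, if compiled in

SDL 2 dropped the fbcon backend, so with SDL 2 you would look at its DirectFB or KMS/DRM backends instead.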
I have an embedded Linux ARM chip with an LCD display on the framebuffer. I connect to the chip over a serial console. I can access the framebuffer directly with low-level commands. However, I need to draw some figures or even sprites, and I am searching for something to do that. Can, for example, SDL run on the framebuffer without X, or is there a similar graphics library? High-performance video-like speed is not needed, because animation will probably not be used, but the GUI should be at least usable. Ncurses was useful for a text interface, but I need some graphical interface.
What is the easiest way to draw graphics on Linux framebuffer?
Have you looked at fbsplash? It is lightweight, and doesn't require X11. Nor does it require kernel patching. There is a package in the Arch user repository that includes a script for filesystem check progress messages and other features...
I've been designing a Linux distro and trying to incorporate a nice user experience into it in the form of pleasing art and an interface that won't confuse or overwhelm first-time Linux users. The problem I'm working on now is attempting to bring up a loading screen during the boot process that either has a progress indicator or a "dummy" progress bar ala Windows XP–just something that moves to reassure the user that the system hasn't forgotten about them–and that the user can escape out of by hitting a certain key. I've already created one and I'm looking for the next step in including it in the distro. I've already tried: splashy - doesn't work with the current kernel. At all. MPlayer with -vo directfb via DirectFB - may work in the long run, but DirectFB seems to both produce a garbled image and overload the framebuffer, making the console unresponsive. Plus, it's not as modular as I'd like (how to signal that it's finished loading?). I'd rather not have to patch the kernel (like the abandoned bootsplash project does), as this tends to break horribly when a new kernel version comes out. Also, from what I've seen, kernel-modifying projects tend to be difficult for developers to maintain for that reason, resulting in a high project-abandonment rate. To get to the point, my question is this: can you recommend a good bootsplashing utility that can do what I just described? I'm using Linux 2.6.38.7 and basing the distro on Slackware 13.37.
Creating a boot splash screen
The answer is simple: it is not possible with uvesafb as it was not tailored for that purpose. Xorg uses XrandR and recent multiple monitor configurations use Kernel Mode Setting (KMS), which recent video drivers are designed to use as well. You might therefore have better chances with KMS and an Intel 945GME. I have not checked however.
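If you do try the KMS route, a quick way to check what the kernel driver exposes is to list the DRM connectors and their status; this is only a sketch and the connector names (card0-VGA-1, card0-TMDS-1, ...) vary with the hardware:
for c in /sys/class/drm/card0-*/status; do echo "$c: $(cat "$c")"; done
With a KMS driver bound, each connected output can then be addressed individually, which is exactly what uvesafb cannot do.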
I'm using uvesafb to get a simple framebuffer on an Intel 945GME embedded graphics controller. That works fine. The PC has a single combined DVI+VGA output connector and both outputs currently show the same graphics. AFAIK that controller should be able to show different screens on the two outputs (i.e. a dual monitor setup). How can I configure uvesafb to operate in this mode? Ideally I'd get a /dev/fb1 along with /dev/fb0, but it would also be okay if the second screen would just show a different offset within /dev/fb0.
dual monitor with uvesafb / Intel 945GME?
Following up on @Thomas Dickey's answer with regards to the solarized script: fbterm's initc uses decimal values, not hexadecimal values, so you will need to rewrite most of it. Once done, it is invoked within another script (eg /etc/profile or ~/.bashrc) using: . solarized-fbterm.sh Luckily I have already done this, solarized-fbterm.sh:
#!/bin/bash
#
# Author: [emailprotected] (Paul Wratt)
# Original: [emailprotected] (Benjamin Staffin)
# Set your fbterm's color palette to match the Solarized color scheme by
# using escape sequences. fbterm uses decimal values not hex values.
#set -o nounset

base03="0;43;54"
base02="7;54;66"
base01="88;110;117"
base00="101;123;131"
base0="131;148;150"
base1="147;161;161"
base2="238;232;213"
base3="253;246;227"
yellow="181;137;0"
orange="203;75;22"
red="220;50;47"
magenta="211;54;130"
violet="108;113;196"
blue="38;139;210"
cyan="42;161;152"
green="133;153;0"

printf "\033[3;234;$base03}\033[3;235;$base02}\033[3;240;$base01}\033[3;241;$base00}\033[3;244;$base0}\033[3;245;$base1}\033[3;254;$base2}\033[3;230;$base3}\033[3;136;$yellow}\033[3;166;$orange}\033[3;160;$red}\033[3;125;$magenta}\033[3;61;$violet}\033[3;33;$blue}\033[3;37;$cyan}\033[3;64;$green}"

function cset() {
    ANSI=$1
    RGB=$2
    printf "\033[3;%d;%s}" $ANSI "$RGB"
}

#black
cset 0 $base02
cset 8 $base03

#red
cset 1 $red
cset 9 $orange

#green
cset 2 $green
cset 10 $base01

#yellow
cset 3 $yellow
cset 11 $base00

#blue
cset 4 $blue
cset 12 $base0

#magenta
cset 5 $magenta
cset 13 $violet

#cyan
cset 6 $cyan
cset 14 $base1

#white
cset 7 $base2
cset 15 $base3
I have fbterm installed and I'm attempting to use it with the solarized color scheme. I have not been able to find any information about this. The colors are already added to my .Xresources and working with xterm. Is there any way to use this colorscheme in the framebuffer?
Solarized colorscheme in fbterm?
I use fbi (frame buffer image) for that. Sources are also available for fbi improved
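A typical invocation, as a sketch (device, console number and file name are placeholders; the options are the usual ones from fbi's man page):
fbi -d /dev/fb0 -T 1 -noverbose -a picture.png
-d selects the framebuffer device, -T the virtual console to attach to, -noverbose hides the status line, and -a autoscales the image.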
I want do display a png image on a framebuffer on an embedded Linux. I already found the manpage of png-fb: man fb-png But I could not find a source for that. Does anybody know the source for that program? Alternatively, is there another program to display a png image on a framebuffer?
Display a png image file on a framebuffer: png-fb source
The driver is a Linux kernel module. Download the source of the Linux kernel, have a look at the code of the existing framebuffer drivers in drivers/video/fbdev (github here) and the documentation in Documentation/fb (github). Google for tutorials on how to write kernel modules, and practice with a simple module first. Just mapping memory won't be enough; you'll have to implement a few ioctls. Writing kernel drivers is not easy. If you have to ask this kind of question (and you asked a lot in the past few days), you probably won't be able to do it. X is a server for the X protocol. It can use hardware via the DRM kernel modules, and it can also use hardware via framebuffer drivers (with the fbdev X driver). Details about that are easy to find online; google. /dev/fb0 is a framebuffer device, so you don't need to concern yourself with X or DRM.
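Before writing any code it can also help to look at what an existing framebuffer driver exposes to userspace, so you know what your own module will have to provide; a couple of harmless inspection commands, assuming /dev/fb0 already exists on the build machine:
cat /sys/class/graphics/fb0/name     # which driver provides fb0
fbset -fb /dev/fb0 -i                # geometry, timings, line length reported by that driver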
I want to write a Linux driver which maps my specified memory address space to /dev/fb0. Which part of Linux should such a driver belong to: DRM, the framebuffer layer, the X server, or something else? Which properties should my driver have?
mapping linux /dev/fb0 to DDR for displaying
You can try running it directly from inittab. Edit /etc/inittab and replace the line
1:2345:respawn:/sbin/getty 38400 tty1
with
1:2345:respawn:/usr/bin/python /srv/game/game.py
If the game crashes, init will restart it again. The game probably needs to know that it should open tty1 (or any other at your choice). If you need the console, the other terminals should be normal, so ctrl+alt+F2 should jump to a login console. If you want to try with the runlevel, you are on the right track... you probably need to define a TTY (probably export TTY=/dev/tty1) so the app knows where it should connect (as inittab and rc scripts run without any TTY defined). As I don't know Python or framebuffer consoles, I don't know how to do that in Python or what else is needed (maybe a more framebuffer- or Python-specific question on Stack Overflow is needed).
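Since the game in question uses pygame/SDL, a common pattern is to point inittab at a small wrapper script that selects the console and the framebuffer before starting the game. This is only a sketch: the path /srv/game/run-game.sh is made up, and SDL_VIDEODRIVER/SDL_FBDEV are the standard SDL 1.2 environment variables:
#!/bin/sh
# /srv/game/run-game.sh - start the game on tty1 and /dev/fb0, without X
export TTY=/dev/tty1
export SDL_VIDEODRIVER=fbcon
export SDL_FBDEV=/dev/fb0
exec /usr/bin/python /srv/game/game.py
and then in /etc/inittab:
1:2345:respawn:/srv/game/run-game.sh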
I have a python application that uses pygame to access the framebuffer directly without using X. I want to start this application on startup instead of showing the console login prompt. I haven't found any good resources which explains how I would do it. Just the same way gdm is started instead of showing a console login prompt. Bonus question: What would happen if said application crashed? Would the console login prompt be shown? Edit: I have been reading up on runlevels and startup. More specific question below Will it be enough to create a /etc/init.d script which starts my python program, update rc.d with update-rc.d and setting priority to 99 so that it runs last and setting it to run under runlevel 5 (Which is for gui applications I heard). Then changing the default runlevel 5 in /etc/inittab? Or do I have to do something special since the program uses framebuffer?
How do I start a gui framebuffer (no X) application on startup instead of console login prompt?
After noticing that /dev/fb0 didn't exist despite having loaded fbcon and a framebuffer device module, I figured it out:
1. Build i915 as a loadable module instead of built-in and make sure that legacy fbdev support is enabled. (Building it as a loadable module is perhaps not necessary, I only did it to ensure I could blacklist/unload i915, but the key is to select legacy fbdev support.)
2. Enable the framebuffer console (fbcon) and build it as a module. Ensure that tty is also enabled.
3. Edit /src/kernel/drivers/gpu/drm/i915/i915_drv.c and remove or comment out all if statements that refer to conflicting framebuffer modules (just search for "conflict"; on the 4.4.250-R89 kernel source there are 3 of these), otherwise you might encounter an error during make. Apparently the i915 driver for ChromeOS doesn't want you to have a framebuffer console.
4. Build the kernel and modules. Install to Linux.
5. Add fbcon to /etc/initramfs-tools/modules to load it at boot (it does not load by default). Alternatively, you can load it manually when you need to use TTY emulation.
6. Update initramfs and grub.
7. Reboot. You should be able to see kernel boot messages and use Ctrl+Fn to access TTYn.
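For the initramfs and GRUB update steps, the commands on a Debian-based userland would be roughly the following (run as root; this assumes the standard initramfs-tools and grub2 packages):
echo fbcon >> /etc/initramfs-tools/modules
update-initramfs -u
update-grub
reboot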
I'm running Linux (Debian 10) on a Chromebook (Eve) using a stock Chrome OS kernel (4.4.x) with minor modifications. Everything runs (mostly) fine except that the TTY console cannot be accessed via Ctrl+Alt+Fn, which does switch framebuffers as intended (i.e. Ctrl+F1 switches to the DM and Ctrl+F2 switches to the desktop), but there doesn't seem to be any framebuffer for TTY emulation to display on (the screen just freezes upon Ctrl+F3 but the desktop can be recovered just by Ctrl+F2). Given that no log is displayed at boot, I'm guessing it's a kernel configuration issue. Relevant driver options related to TTY, console, and framebuffer are already enabled in the kernel configuration, and tty devices are listed in /dev. I tried compiling the Chrome OS kernel using stock Debian 10 configurations (for what overlaps, and the default configuration for those that don't) and the TTY console did become available (but obviously a bunch of other things didn't work), so it isn't something that was written out of the Chrome OS kernel. I compared the stock configurations between Eve and Debian and noted where different options are chosen on the same configuration items (there are 532 of these) and noted the following differences:
CONFIG_ITEM                              Debian      Eve
CONFIG_AGP_AMD64                         y           is not set
CONFIG_AGP_SIS                           y           is not set
CONFIG_AGP_VIA                           y           is not set
CONFIG_VGA_SWITCHEROO                    y           is not set
CONFIG_DRM_FBDEV_EMULATION               y           is not set
CONFIG_DRM_LOAD_EDID_FIRMWARE            y           is not set
CONFIG_DRM_DP_CEC                        y           is not set
CONFIG_DRM_VGEM                          is not set  y
CONFIG_DRM_UDL                           is not set  y
CONFIG_DRM_CIRRUS_QEMU                   is not set  m
CONFIG_FIRMWARE_EDID                     y           is not set
CONFIG_FB_BOOT_VESA_SUPPORT              y           is not set
CONFIG_FB_CFB_FILLRECT                   y           is not set
CONFIG_FB_CFB_COPYAREA                   y           is not set
CONFIG_FB_CFB_IMAGEBLIT                  y           is not set
CONFIG_FB_SYS_FILLRECT                   y           is not set
CONFIG_FB_SYS_COPYAREA                   y           is not set
CONFIG_FB_SYS_IMAGEBLIT                  y           is not set
CONFIG_FB_SYS_FOPS                       y           is not set
CONFIG_FB_TILEBLITTING                   y           is not set
CONFIG_FB_VESA                           y           is not set
CONFIG_FB_EFI                            y           is not set
CONFIG_FRAMEBUFFER_CONSOLE_ROTATION      y           is not set
But nothing changed after I compiled the kernel with these settings copied from Debian to the stock Eve configuration. Something else is amiss, and help is appreciated.
How can I enable TTY console?
To post an answer to my own question: The reason it wasn't working was because the fbcon module wasn't being loaded during boot, even though it had been built and installed. Running modprobe fbcon to load the module immediately made the console appear on my screen. I have added fbcon to /etc/sysconfig/modules and it's initializing properly on boot again now. It seems a little strange though, that the module was loading automatically before, without me having to do anything.
I have an Apple MacBook that is running a Linux From Scratch system that I have built. It is a minimal system, just booting into a bash prompt, with no X Window System installed. The graphics chip is an Intel GMA 950, which uses the i915 driver. Previously, I had it booting up into the framebuffer console; however, I tweaked some of the kernel configuration settings the other day and now the framebuffer console doesn't seem to load up any more (although the screen goes black and then resets during boot). Stupidly, I didn't save the kernel config file for the setup I had working, although I do have a printout of the lsmod command for that setup, which shows which kernel modules were loaded: Module Size Used by ccm 20480 6 hid_generic 16384 0 isight_firmware 16384 0 usbhid 32768 0 i915 1343488 1 i2c_algo_bit 16384 1 i915 arc4 16384 2 fbcon 49152 70 bitblit 16384 1 fbcon fbcon_rotate 16384 1 bitblit fbcon_ccw 16384 1 fbcon_rotate fbcon_ud 20480 1 fbcon_rotate fbcon_cw 16384 1 fbcon_rotate softcursor 16384 4 fbcon_ud,fbcon_cw,fbcon_ccw,bitblit drm_kms_helper 114688 1 i915 ath9k 81920 0 cfbfillrect 16384 1 drm_kms_helper ath9k_common 16384 1 ath9k syscopyarea 16384 1 drm_kms_helper cfbimgblt 16384 1 drm_kms_helper ath9k_hw 389120 2 ath9k,ath9k_common sysfillrect 16384 1 drm_kms_helper sysimgblt 16384 1 drm_kms_helper mac80211 405504 1 ath9k fb_sys_fops 16384 1 drm_kms_helper cfbcopyarea 16384 1 drm_kms_helper drm 282624 3 i915,drm_kms_helper ath 28672 3 ath9k_hw,ath9k,ath9k_common pata_acpi 16384 0 intel_agp 16384 0 coretemp 16384 0 video 36864 1 i915 uhci_hcd 40960 0 pcspkr 16384 0 backlight 16384 2 video,i915 ehci_pci 16384 0 ehci_hcd 73728 1 ehci_pci ata_piix 36864 0 rng_core 16384 0 intel_gtt 20480 2 intel_agp,i915 fb 65536 8 fbcon_ud,fbcon_cw,fbcon_ccw,bitblit,softcursor,i915,fbcon,drm_kms_helper agpgart 32768 3 intel_agp,intel_gtt,drm evdev 24576 0 fbdev 16384 2 fb,fbcon mac_hid 16384 0So, you can see that fbcon (which is the driver for the framebuffer console) was loaded. However, the output of lsmod for the newer kernel build (where the console isn't loading) is as follows: Module Size Used by hid_generic 12288 0 arc4 12288 2 i915 1314816 0 usbhid 28672 0 prime_numbers 12288 1 i915 i2c_algo_bit 12288 1 i915 drm_kms_helper 98304 1 i915 cfbfillrect 12288 1 drm_kms_helper syscopyarea 12288 1 drm_kms_helper cfbimgblt 12288 1 drm_kms_helper pata_acpi 12288 0 sysfillrect 12288 1 drm_kms_helper ath9k 73728 0 ath9k_common 12288 1 ath9k ath9k_hw 368640 2 ath9k,ath9k_common sysimgblt 12288 1 drm_kms_helper fb_sys_fops 12288 1 drm_kms_helper cfbcopyarea 12288 1 drm_kms_helper mac80211 356352 1 ath9k coretemp 12288 0 ata_piix 32768 0 ath 24576 3 ath9k_hw,ath9k,ath9k_common drm 241664 3 i915,drm_kms_helper uhci_hcd 36864 0 video 32768 1 i915 intel_agp 12288 0 pcspkr 12288 0 intel_gtt 16384 2 intel_agp,i915 fb 57344 2 i915,drm_kms_helper ehci_pci 12288 0 ehci_hcd 65536 1 ehci_pci agpgart 28672 3 intel_agp,intel_gtt,drm rng_core 12288 0 fbdev 12288 1 fb backlight 12288 2 video,i915 evdev 20480 0 mac_hid 12288 0fb, fbdev, i915, drm, intel_agp are all there, but fbcon isn't. Does anyone know of a possible reason why fbcon isn't loading up? Edit: (to answer a question in the comments) The output of grep CONFIG_FRAMEBUFFER_CONSOLE .config is: $ grep CONFIG_FRAMEBUFFER_CONSOLE .config CONFIG_FRAMEBUFFER_CONSOLE=m CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY=y # CONFIG_FRAMEBUFFER_CONSOLE_ROTATION is not setfbcon is configured as a module (as it seemed to be in the previous setup). 
I believe the second line means that it should be setting fbcon to the primary display device by default. Update: I loaded the module manually, using modprobe fbcon and it worked - all of the text appeared on the screen. I still have to figure out why it didn't load on boot though and how I can make it do that. Also, I ran cat $(readlink -f /sys/class/graphics/fb0/name) and that printed inteldrmfb. So, it appears it is using a framebuffer that is built in to the i915 Intel driver.
How can I get my framebuffer console working?
I'm afraid the core of your problem is very much related to the use of VGA. The problem is that VGA signaling is analog. Basically, the analog signals emitted by the video card on the VGA port are meant for driving the electromagnetic coils and adjusting the output of the electron gun in an old-school, tube-based monitor. These older monitors didn't have a discrete grid of pixels and rarely was the display flush from edge to edge. With an LCD, which is not an analog device, when using VGA the circuitry in the LCD display has to (I'm simplifying the details here dramatically) basically emulate how an old CRT monitor works to convert the incoming signals into data for each of the LCD's pixels. This is by no means a perfect process and at times the LCD circuitry will have trouble synchronizing to the beginning of the VGA signal scanline. Often, LCD displays with analog inputs will have an "auto-adjust" button that will trigger a "reconfiguration/resynchronization" algorithm inside the display. I'd recommend you check your LCD's manual for details. This awkward situation is why people who can are encouraged to switch to DVI, where the pixel data is transmitted from the video card to the display digitally. In your type of situation, I've found the problem can often be worked around by changing the resolution and refresh rate used by the video card. For example, at a Linux text console, try the following command:
sudo fbset -a 1280x1024-60
If that works, then that would confirm your LCD is having trouble synchronizing with your video card's output. I would encourage you to heed the NVIDIA Linux driver's caution and disable the console framebuffer drivers if you mean to use the NVIDIA proprietary driver. If the framebuffer support is important, then I'd recommend you consider using the nouveau driver which, given the age of your video card, should work very well. Good luck.
A long time ago in a galaxy far, far away I had fixed the boot screen using this nice solution. Since that time I have been looking for a way to move the framebuffer in the virtual console slightly to the right and down, relative to the X screen position (or vice versa), because it is shown in the wrong position, and part of the symbols cannot be seen. I reproduced the look and feel of this problem using the Crop tool and the fbgrab command. The GRUB menu also hides its left side behind the screen, but I don't know how to make a screenshot there without VirtualBox (and I doubt this screenshot would be useful). However, the main X screen shows itself in the correct place (as it seems to me), and my monitor (ACER AL1916) always autotunes to this position, even if the virtual console is shown. I looked in many sources including AskUbuntu, but have found only one solution: switch from a VGA cable to DVI, which is not acceptable, because my graphics card (NVIDIA 6150SE) has only a VGA port onboard. I also found that the fbset command can adjust screen parameters, but I need help understanding how it works. Here is the output of the sudo fbset command:
mode "1280x1024-77"
    # D: 131.096 MHz, H: 80.328 kHz, V: 76.649 Hz
    geometry 1280 1024 1280 1024 32
    timings 7628 160 32 16 4 160 4
    rgba 8/16,8/8,8/0,8/24
endmode
UPD: while reporting an unrelated bug I found one line that says: .proc.driver.nvidia.warnings.fbdev: Your system is not currently configured to drive a VGA console on the primary VGA device. The NVIDIA Linux graphics driver requires the use of a text-mode VGA console. Use of other console drivers including, but not limited to, vesafb, may result in corruption and stability problems, and is not supported. So, does anyone know how to adjust the position of the virtual console or the X screen relative to each other?
How can I move framebuffer or X screen relative position?
I finally dropped gstreamer and used ffmpeg without any more issues. The command looks like this: ffmpeg -fflags nobuffer -flags low_delay -rtsp_transport tcp -stimeout 1000000 -i <RTSP_stream_addr> -pix_fmt bgra -loglevel
I'm trying to forward video file to the framebuffer on my device that has no X. I'm using gstreamer with fbdevsink plugin.When I test it with gst-launch-1.0 videotestsrc ! fbdevsinkit works perfectly. However when I try to open any video file on my device with command gst-launch-1.0 filesrc location=right_top1.mp4 ! fbdevsinkit stops working immediately with output Setting pipeline to PAUSED ... Pipeline is PREROLLING ... Pipeline is PREROLLED ... Setting pipeline to PLAYING ... New clock: GstSystemClock Got EOS from element "pipeline0". Execution ended after 0:00:00.006988697 Setting pipeline to NULL ... Freeing pipeline ...I cannot figure out what is going on, because even when I add debugging (-v --gst-debug-level=2) output is the same. If it matters, I'm working on Nvidia Jetson Nano with Yocto OS. Do you guys have any idea how to resolve or just debug it?
Got EOS from element "pipeline0" on gst fbdevsink
Turns out an incorrectly configured framebuffer driver was to blame, probably the color depth or bit setup. So the terminal console just drew itself black on black and the ts_calibrate tool wasn't working. Also, fbcon wasn't enabled in the kernel options. Strangely, the Qt app worked anyway though.
Trying to calibrate the touchscreen for Qt apps with tslib on an ARM device. When running ts_calibrate or ts_test, they both work (i.e. return info about touches) and ts_calibrate successfully calibrates the touchscreen when touching the screen somewhere around where the checkpoints should be, but the screen is just black. Qt apps (Qt4 ones through the QWS) run fine. Here are the export params for tslib:
export TSLIB_TSDEVICE=/dev/input/event1
export TSLIB_TSEVENTTYPE=INPUT
export TSLIB_CONFFILE=/etc/ts.conf
export TSLIB_CALIBFILE=/etc/pointercal
export TSLIB_CONSOLEDEVICE=none
export TSLIB_FBDEVICE=/dev/fb0
export TSLIB_PLUGINDIR=$TSLIB_PATH/ts
The ts.conf file has just the default values:
module_raw input
module pthres pmin=1
module variance delta=30
module dejitter delta=100
module linear
upd: A possibly related issue on our device is that the terminal doesn't draw itself on the screen either; we're working with it over the COM (serial) port. TL;DR What could be the problem of the tslib tools not drawing the picture?
tslib tools don't draw anything on the screen
/dev/fb0 is created by the kernel as soon as the first framebuffer display driver has detected and initialized the respective display controller hardware. If that driver is built into the kernel, it might effectively already be there when userspace processes start running. If you add a udev rule like: SUBSYSTEM=="graphics", KERNEL=="fb0", TAG+="systemd"you should get a *.device unit for it, which you can then use for dependencies. If you add ENV{SYSTEMD_WANTS}+="your.service" to the udev rule, udev will tell systemd to start your service as soon as this device appears, so you could run fbset as a separate service if it suits your plans. At least Debian 12 has console-setup.service run After=console-screen.service kbd.service local-fs.target, but console-screen.service does not seem to be defined. So you might define your own console-screen.service that runs your fbset and console font operations, and have udev trigger it with SUBSYSTEM=="graphics", KERNEL=="fb0", ENV{SYSTEMD_WANTS}+="console-screen.service"as soon as the device becomes available. Then the order of operations would be: /dev/fb0 appears -> your custom console-screen.service runs -> console-setup.service runs. You could then configure the standard console-setup.service to leave the console font alone, and instead set it in your custom console-screen.service.
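As a sketch of what that could look like in practice (the unit name console-screen.service and the helper script path are assumptions, nothing ships these files by default):
# /etc/udev/rules.d/99-fb0.rules
SUBSYSTEM=="graphics", KERNEL=="fb0", TAG+="systemd", ENV{SYSTEMD_WANTS}+="console-screen.service"
# /etc/systemd/system/console-screen.service
[Unit]
Description=Set framebuffer mode and console font once fb0 exists
After=dev-fb0.device
BindsTo=dev-fb0.device
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/sbin/set-console-screen
The script referenced by ExecStart would then run your fbset -s detection and setfont calls.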
I want to update the console-setup.service to detect screen resolution using fbset -s and adjust console font size accordingly. For fbset, I need /dev/fb0 to be present, but I don't know which service I should create a dependency on. How is /dev/fb0 created on startup?
Which service creates /dev/fb0 node?
The “hidden relationship” is related to the fact that Linux supports multiple virtual terminals, which means that the framebuffer can be used by a number of different terminals. Programs which manipulate the framebuffer directly need to be aware of which terminal currently owns the framebuffer:When such a program starts, it needs to store the current terminal configuration, then tell the kernel that it wants to control the display directly (it switches to “graphics mode” using the KDSETMODE ioctl) and set the framebuffer up as necessary (e.g. in fbi, configure panning). It also needs to tell the kernel that it wants to be told about virtual terminal switches (when the user presses CtrlAltFn).If the user switches terminals, the kernel will then tell the running program about it; the program needs to restore the terminal settings and relinquish control over the terminal (VT_RELDISP) before the switch can actually proceed.If the user switches back to the terminal running the framebuffer-based program, the kernel again tells the program about it, and the program sets up the terminal and framebuffer as necessary and restores its display.This is described in detail in How VT switching works.
A framebuffer is a device file which allows for a simplified interface to the screen. For example, running the below code on a Raspberry Pi with an HDMI display connected: cat /dev/urandom > /dev/fb1 There are commands (fbi, fim) which allow for injecting full images into the framebuffer. There are multiple resources on the internet (ref1, ref2, ref3) trying to more or less successfully explain how to make a systemd service which will result in an image on the screen. A common thread in those resources is the mention of a tty together with the framebuffer (i.e. both fbi and fim have options to pass them a tty). My assumption was that a tty is a separate concept from a framebuffer. The tty uses the framebuffer to output content to a user, but the framebuffer isn't in any way tied to a tty. Is there a hidden relationship between a tty and a framebuffer which could explain why commands to print images to a framebuffer seem to depend on a tty?
Relationship between framebuffer and a tty
This is caused by SDL setting the terminal mode to KD_GRAPHICS and input to K_MEDIUMRAW. KD_GRAPHICS causes the terminal to stop drawing to the display, while K_MEDIUMRAW causes the input to be passed as keycodes (not characters). By resetting both values to KD_TEXT and K_XLATE or K_UNICODE, the terminal can be (at least partially) restored. I wrote (based on some existing code) a small program for restoring the console, which seems to work well: https://github.com/hobbitalastair/termfix See also Best practice for hiding virtual console while rendering video to framebuffer and http://lct.sourceforge.net/lct/x60.html.
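If the machine can still be reached over SSH, the keyboard half of the problem can often be undone with the stock kbd tools; this is only a partial workaround, since switching the console back from KD_GRAPHICS to KD_TEXT still needs an ioctl (which is what termfix does):
kbd_mode -a -C /dev/tty1      # back to K_XLATE (ASCII) translation
# or: kbd_mode -u -C /dev/tty1 for K_UNICODE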
On an older computer running Linux (using vesafb, musl libc, busybox), Netsurf will occasionally segfault in low memory situations. When it does so, the last image stays visible on the display, and typing seems to do nothing, including trying to switch VT using Ctrl-Alt-Fx. I'd like to know why it does this, and how to fix it. I can happily SSH into the machine and shut it down, and there doesn't seem to be anything in dmesg. This doesn't happen on a machine with an intel GPU, running Arch (systemd). The behavior is similar to that described at re-initialize the framebuffer when program crashes, and is also using SDL. According to https://dvdhrm.wordpress.com/2013/08/24/how-vt-switching-works/, the kernel should be handling VT switching, so this behavior seems quite surprising to me.
Linux framebuffer not reverting to text console when netsurf crashes
I checked on two different (Dell) laptops with i915-family graphics cards. In both cases a kernel with the option described in https://wiki.gentoo.org/wiki/Intel enables mirroring of the laptop screen on an external monitor. I didn't need to do any configuration. To be clear, I was testing a text console, not an X server; neither laptop had an X server at all.
I try to 'mirror' a Linux console (not Xserver) from a Dell laptop to an external monitor connected with HDMI cable. The graphic card is Intel UHD 620. What's the best approach? Initial net search indicates that KMS might be helpful. Is that correct? This question seems to be similar to Specify Monitor For Linux Console, but that doesn't have clear answer.
Two screens/monitors in Linux console (FB not Xorg)
I solved this by reading the raw input device and parsing it similarly to https://stackoverflow.com/a/2554421/3530257
I have a Raspberry Pi running some software (I have the source) that needs user interaction and has a special (USB) keyboard with only 3 keys. The program runs on the framebuffer (SDL) and is launched remotely; I need this program to react to the key presses, but this seems impossible if the user is not logged in locally. What can I do? The solution must not use a lot of resources, and the delay should be within 300ms. The typical use case is one key press every 10 minutes over the course of 4-8 hours, but it can be as often as a key press every 2 seconds (highly unlikely). This all runs on top of Raspbian, and I have root access.
Use/Grab the only keyboard with no user logged locally
I've been doing some research into this myself, and the short answer seems to be: yes - I need a framebuffer to enable the console. According to the Wikipedia article on the Linux Console, the console has two modes: text mode and framebuffer. From the description, it seems that the text mode is quite basic and may not work with all graphics hardware. So, that leaves the framebuffer console, which is obviously going to require a framebuffer to work. I copied the output of lsmod to a file, for the kernel configuration where I had it working, which shows this when piped to grep fb: $ less lsmod_LFS | grep fb fbcon 49152 70 bitblit 16384 1 fbcon fbcon_rotate 16384 1 bitblit fbcon_ccw 16384 1 fbcon_rotate fbcon_ud 20480 1 fbcon_rotate fbcon_cw 16384 1 fbcon_rotate softcursor 16384 4 fbcon_ud,fbcon_cw,fbcon_ccw,bitblit cfbfillrect 16384 1 drm_kms_helper cfbimgblt 16384 1 drm_kms_helper fb_sys_fops 16384 1 drm_kms_helper cfbcopyarea 16384 1 drm_kms_helper fb 65536 8 fbcon_ud,fbcon_cw,fbcon_ccw,bitblit,softcursor,i915,fbcon,drm_kms_helper fbdev 16384 2 fb,fbconSo, it was using the framebuffer console (fbcon). The next question though is why I can't get the fbcon module to load up any more (which seems to be the reason that nothing is printing to my screen).
I've recently built a Linux From Scratch system on my Apple Macbook laptop; however, I've been struggling to understand the graphics hardware and what kernel driver options I need to enable. The LFS system is (currently) a fairly minimal system that boots up into Bash, but doesn't have the X Window system or any DE. The laptop is a Macbook 2,1 which includes an Intel GMA 950 graphics chip. I have enabled what I believe to be the appropriate driver in the Kernel for this GPU, which is the i915 driver; however, unless I also enable some other options relating to 'framebuffer devices' (I have yet to identify the exact config options), nothing prints on the screen during boot (although, the screen changes to a different shade of black a couple of times). Can someone explain what is going on here? If that i915 driver is the correct one for the GPU, then shouldn't that be enough for the system to print the terminal output to the screen? If not, then what else should I need, other than the i915 driver? I also have Trisquel installed on the same laptop, which boots up fine into the LXDE environment and, according to lsmod, the i915 driver is the correct one and the kernel doesn't seem to be loading any framebuffer-related drivers. I'm confused!
Do I need a framebuffer driver for a minimal CLI system without X?
The short answer You are running the command in one display, and fbset is telling you about another one. These two displays are the framebuffer, which runs the tty? CLI screens, and the display manager, which provides your Gnome session. The longer answer The framebuffer is used while you boot, and for the text consoles you typically get to with Alt-F1, F2, etc. Gnome is a display manager that also provides an X server for graphics applications. They are mostly independent of one another, but in most systems they do share a common "Direct Rendering Manager" or DRM driver. This allows you to swap between them without having to reset the video hardware or see strange graphics garbage (like we had to do years ago when X-servers ran completely in user space and talked directly to hardware). It also allows for a nice smooth transition from the framebuffer-based boot screen to the display manager-based greeting screen. You may find that when you Alt-F2 to get to the framebuffer tty2 console, then login and try playing with fbset it will make a lot more sense. Although I need to conclude with a bit of warning. It may still not work like you think it should. Many modern frame buffers don't actually change the hardware resolution, but only change a "window" against that screen. So, you can select a lower resolution in fbset, but it will not make the images larger, it will only limit the text output to a smaller block aligned to the top left of your screen. If someone could answer how to get that to work properly, I'd be very interested. If you really want to dig deeper, then check out this site. There is a nice picture that ties everything together there.
I'm exploring the linux frame buffer, /dev/fb0, and when I run sudo fbset -i from a virtual console in Gnome 3 (using Terminator) on Fedora 23, it reports the dimensions of the frame buffer as 1280x768, but my Gnome desktop resolution is 1680x1050. Why is fbset telling me that the frame buffer is 1280x768? Full output of fbset -i: mode "1280x768" geometry 1280 768 2048 2048 32 timings 0 0 0 0 0 0 0 rgba 8/16,8/8,8/0,0/0 endmodeFrame buffer device information: Name : svgadrmfb Address : (nil) Size : 16777216 Type : PACKED PIXELS Visual : TRUECOLOR XPanStep : 1 YPanStep : 1 YWrapStep : 0 LineLength : 8192 Accelerator : No
Why does fbset -i report a different resolution?
Boot Level Vs. Kernel Level
From the OP's Comment:
@eyoung100 I'm able to view and select EFI GOP modes in the grub2 menu. But there are only two modes available: 800x600 and 1024x768. My display is 1920x1080, and these modes look awful on it (especially if I use Xorg with efifb). That's why I can't use the EFI framebuffer...
I believe that the OP is trying to fix a boot-level issue with a kernel-level fix. Some things to remember:
- The GOP resolutions and EFI Variables are stored in the computer's NVRAM as read-only. The only time the read-only protection is removed is after the computer boots.
- Theoretically, the values can be changed, but the only time I've ever seen them updated is with tools like efivars and versions of grub.
- While the above tools can access the EFI values to change things like the boot order, I've never seen, or been able to Google, anyone ever using the tools to access and change the GOP or Graphics Output Protocol.
- As one can see from the link in 4, the Graphics port is coded into the UEFI shell. The advantage of this is that it removes the reliance on hardware in order to create the display output, as noted in this PDF.
What's Next
Because the GOP is only accessible in a pre-boot environment, the resolution must be set in the pre-boot environment, a.k.a. the UEFI Shell, which the OP seems to have beaten me to:
@eyoung100 I was able to fully disable CSM in UEFI, and then in grub2 I saw about six new modes for GOP! One of them was my full-resolution mode! It worked fine. I booted in the correct mode with /dev/fb0 provided by efi-framebuffer.0
As such, I'm providing a method to create a UEFI Shell for future readers. One can use this method to perform shell scripting (to set up the way the PC boots), set resolutions, repair broken boot options and more, without any OS. I could write the steps out here, but for length and brevity's sake, follow steps 1 through 7 at: KilianKegel/Howto-create-a-UEFI-Shell-Boot-Drive. Note that you can replace Step 3 with a more recent stable version by downloading a shell.efi file from the EDK2 Releases Page.
Determining Available Resolutions
- Boot from the newly created USB stick.
- After letting startup.nsh finish, at the prompt type gop mode. A list will appear displaying 3 columns and a row which will be asterisked**. Starting on the left, the first column is the choice number, the second column is the number of characters per column per screen, and the third column is the number of rows per screen.
- Use gop set x where x is the number corresponding with your chosen choice***.
- Reboot to save the choice***.
Notes:
** I believe the first choice is 80x25 which, if I'm remembering right, equates to a resolution of 640x480.
*** It may take multiple reboots and sets before selecting the desired outcome.
Now That My Resolution Is Set
As the OP noted via the comment above, make sure that when using a completely UEFI-based system, CSM is completely off. CSM was put in to emulate BIOS-based systems until the UEFI protocol was completely accepted. One of those emulations was the VBIOS in graphics cards. Until the OS took over the boot process, the VGA BIOS protocol instructed the card to boot to a video mode no greater than 1024x768.
After the OS took over, the driver was switched to the mode that was set via the OS settings, i.e., the driver you download from your graphics card's manufacturer or a third-party OEM like PNY takes over. In the case of my question and answer, to completely disable CSM, I had to Enable WHQL Support for Windows 8.1/10 so all my OSes could see the GOP-aware NVIDIA GeForce 1070 in my system, even though FreeBSD has no need for Windows driver signing. I'll let the OP comment on how they were able to find the 6 additional resolutions below, as I spent about an hour searching through screenshots of the surprisingly common Aptio Setup Utility before I started writing this answer, and was unable to find a written procedure.
We're Now at the Kernel Level...
There is another new problem though, likely caused by disabling CSM. I can read/write to /dev/fb0 and it displays correctly. But if I run Xorg, it refuses to detect my framebuffer device, no matter what I do to the configuration file (or the total lack of one). It says No screens found after a few attempts to invoke fbdevhw
The OP now needs to use their OS's package manager (they never stated the OS but I'm going to guess at a Debian-based one), and issue a: sudo apt install nvidia-driver to install the last supported version of the proprietary binary blob, which will be 470.xx.xx. Included in that download/install should be a tool, nvidia-xconfig. If it's not there, try: sudo apt install nvidia-xconfig. That tool will write an xorg.conf file that can then be modified by hand. Unfortunately, because of the proprietary driver, the auto-detection in X.Org will attempt to install the nv driver, which is worse than the nouveau driver, and the nouveau driver provides no 3D acceleration.
Adding this Based on Comments Below
I realize the OP wants to run X.Org on the fbdev device. The reason I answered this post the way I did (advising the OP to use either the NVIDIA proprietary or nouveau driver) is because the NVIDIA devs refuse to create code that allows their graphics cards to "attach" to the fbdev or fbcon kernel drivers, as evidenced by the answer directly from a developer all the way back in 2016. Note that the version of the driver is a moot point here, because as I stated in my comments below:
With the advent of UEFI, the ability to use the uvesa kernel module and pass vga=xxx on the kernel command line flew out the window.
This was backed up by birdie's answer. The user community has been complaining about this issue long before that post, and has asked multiple times that NVIDIA release an open-source driver that the community can refine, but seeing as NVIDIA is a for-profit company, I don't see that happening anytime soon.
When I boot my Linux machine with UEFI and grub2, I get only a few graphics modes (resolution modes) available, and both of them are really smaller than my monitor/screen. For example, the boot console resolution gets set to 800x600 while my screen is 1920x1080, and the picture looks really bad. My GPU is Nvidia, but I don't want to install, insert or use any GPU-specific drivers. At least, not at boot. I am interested in changing the initial EFI/VGA boot console (and GRUB2) resolution to match my full screen. Some sources say that I cannot, because it is only possible to use resolutions supported by the GPU in VESA-extensions mode. Is that true? Does it apply to my case? Or do I just need to change some UEFI settings? I would like to have the full resolution enabled right from the moment when GRUB starts. Is it possible?
Low resolution in the EFI/VGA early boot framebuffer/console (and in GRUB)
Xorg's FBDEV driver requires a BusID option to be passed in the config, not only a path to the framebuffer's char-device. I don't know why that is, but here is how to configure it. The first step is to figure out the "bus id" of the framebuffer device. Assuming that the wanted framebuffer device is fb1: ls /sys/class/graphics/fb1/device/driver The example output (the output in my case) is: bind module uevent unbind vfb.0 From this list of entries you should ignore bind, module, uevent, unbind and ANYTHING_id (if it exists). Then you're left with exactly the "bus id" of your framebuffer (in my case, vfb.0). Here is a different example with my fb0 device, by the way, which is a real FB from nouveaudrmfb:
# ls /sys/class/graphics/fb0/device/driver
0000:03:00.0 bind module new_id remove_id uevent unbind
In this case, you can see that the "bus id" is 0000:03:00.0. Knowing the bus ID, you can finally configure the FBDEV driver in the Xorg conf:
Section "Device"
    Identifier "Device0"
    Driver "fbdev"
    BusID "vfb.0"
    Option "fbdev" "/dev/fb1"
EndSection
This is an example configuration for a fb1 device with the vfb.0 BusID. That's it.
I'm using Xorg with the FBDEV driver, configuration:
Section "Device"
    Identifier "Device0"
    Driver "fbdev"
    Option "fbdev" "/dev/fb0"
    Option "ShadowFB" "false"
EndSection
I got a new framebuffer device to my system, it's /dev/fb1. I adjusted the config:
Section "Device"
    Identifier "Device0"
    Driver "fbdev"
    Option "fbdev" "/dev/fb1"
    Option "ShadowFB" "false"
EndSection
But it doesn't work, it still uses /dev/fb0, and doesn't even open /dev/fb1. I'm using Ubuntu-based (jammy-based) OS with xserver-xorg-video-fbdev package installed. Everything works if I do mount --bind /dev/fb1 /dev/fb0 But it's not an option because I want to have access to both the framebuffers (so I did umount /dev/fb0 to undo it). Thanks for any help
Xorg FBDEV refuses to use the specified framebuffer
Not quite. iomem entries describe physical address uses; system RAM is marked as such, anything else describes ranges of addresses which aren’t used for RAM. The “Video ROM” entry points to the actual ROM (well, flash in practice). On x86 it is mapped on systems using a BIOS or CSM; systems booted through UEFI might not have it. You’ll see that on x86 it’s always mapped at 0xC0000: $ sudo cat /proc/iomem|grep "Video ROM" 000c0000-000cfdff : Video ROMThat’s where the video BIOS has always been mapped, and various (old) pieces of software need it to be there.It is typically not copied into RAM. (It used to be, into “shadow RAM”, but that’s not done any more.)On current systems, the code isn’t used (it used to be, in a limited number of scenarios, but as far as I’m aware it isn’t any more, except by vesafb on 32-bit x86).The kernel doesn’t need to get rid of it, it doesn’t take away any memory and it lives in a range of addresses which is ignored in any case (the lower part of the physical address space is effectively unused on x86).
Given some x86_64 Linux stock kernel running a single GPU embedded into whatever PCI-e extension board. cat-ing /proc/iomem, I can see that some space is reserved in RAM and associated with the Video ROM. Can I rightly assume that this is nothing but an exact copy of the BIOS (or equivalent) code for dealing with whatever legacy VGA device plugged into the ISA bus, or whatever PCI device capable of decoding legacy VGA IO and/or MEM? Why is this code copied into RAM? In order to allow faster access times? In order to keep it runnable after the kernel has switched into protected mode? Can I rightly assume that this code will only be used as long as whatever in-kernel dedicated framebuffer driver (fbcon, vgafb, vesafb…) does not take precedence? In case whatever framebuffer driver is made available, can this code be of later use in the lifetime of the running kernel? If not, then why does the Linux kernel not, at boot time, simply get rid of that region when freeing unused memory?
Video rom part of iomem usage
Looks like this was an issue with my setup -- redshift seems to block \e]R from working.
I'm trying to change the colors in the virtual terminal. So far I've tried:echo -en "\e]PXYYYYYY"-style escapes writing to /sys/module/vt/parameters/default{red,grn,blu} the PIO_CMAP ioctlAt this point I'm suspecting that there's a kernel feature I'm missing. My current kernel config is here, my uname -r is 4.9.95-gentoo.
Kernel Feature needed to change framebuffer colors?
In case this can be of any help for other people, I was able to boot in VGA mode with the following change in /etc/default/grub: GRUB_CMDLINE_LINUX_DEFAULT="nomodeset"instead of GRUB_CMDLINE_LINUX_DEFAULT="quiet"This enables all the startup messages and, for some reason, also keeps the console in VGA resolution. I found this bit of relevant advice at https://linuxconfig.org/how-to-increase-tty-console-resolution-on-ubuntu-18-04-server As far as I can see, the VGA resolution can be set with either GRUB_GFXMODE=640x480 GRUB_GFXPAYLOAD_LINUX=keepor GRUB_GFXMODE="" GRUB_GFXPAYLOAD_LINUX=640x480
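Whichever pair of settings is used, it only takes effect after regenerating the GRUB configuration and rebooting, e.g. on Debian:
sudo update-grub        # wrapper around grub-mkconfig -o /boot/grub/grub.cfg
sudo reboot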
In order to port an embedded project from an ARM target to Linux/x86, I have to create a Debian VM (I'm using Virtualbox right now) which starts in framebuffer mode with 640x480 resolution. I used the systemctl set-default multi-user.target command to make the VM start on framebuffer, but it seems it cannot go below 800x600 resolution. All tutorials and guide I can find are related to starting the guest VM at high resolution modes, while I cannot find anything tackling with my issue. I followed the suggestions found at https://forums.virtualbox.org/viewtopic.php?f=29&t=83189 edit /etc/default/grub Uncomment: #GRUB_GFXMODE=640x480 Modify it to required resolution: GRUB_GFXMODE=1024x768 Add the following: GRUB_GFXPAYLOAD_LINUX=keep Save, exit, and run update-grub Edit "/etc/modprobe.d/fbdev-blacklist.conf" and add vboxvideo RebootUsing GRUB_GFXMODE=640x480 and creating the /etc/modprobe.d/fbdev-blacklist.conf file, but with no results - it keeps starting at 800x600 Can anyone help on this? I am currently using Debian 9, can move to another version in case of incompatibilities. EDIT: as requested, here is the output of #cat /proc/cmdline: BOOT_IMAGE=/boot/vmlinuz-4.9.0-11-amd64 root=UUID=5bb1ded6-45a6-4d13-93d8-5f593e66e609 ro quiet
Cannot force Debian to start in Framebuffer 640x480 resolution
Solved it myself. There seems to be very little information about the networking stuff that you can do with Linux, so I have decided to document and explain my solution in detail. This is my final setup:3 NICs: eth0 (wire), wlan0 (built-in wifi, weak), wlan1 (usb wifi adapter, stronger signal than wlan0) All of them on a single subnet, each of them with their own IP address. eth0 should be used for both incoming and outgoing traffic by default. If eth0 fails then wlan1 should be used. If wlan1 fails then wlan0 should be used.First step: Create a new route table for every interface in /etc/iproute2/rt_tables. Let's call them rt1, rt2 and rt3 # # reserved values # 255 local 254 main 253 default 0 unspec # # local # #1 inr.ruhep 1 rt1 2 rt2 3 rt3Second step: Network configuration in /etc/network/interfaces. This is the main part and I'll try to explain as much as I can: auto eth0 wlan0 allow-hotplug wlan1iface lo inet loopbackiface eth0 inet static address 192.168.178.99 netmask 255.255.255.0 dns-nameserver 8.8.8.8 8.8.4.4 post-up ip route add 192.168.178.0/24 dev eth0 src 192.168.178.99 table rt1 post-up ip route add default via 192.168.178.1 dev eth0 table rt1 post-up ip rule add from 192.168.178.99/32 table rt1 post-up ip rule add to 192.168.178.99/32 table rt1 post-up ip route add default via 192.168.178.1 metric 100 dev eth0 post-down ip rule del from 0/0 to 0/0 table rt1 post-down ip rule del from 0/0 to 0/0 table rt1iface wlan0 inet static wpa-conf /etc/wpa_supplicant.conf wireless-essid xyz address 192.168.178.97 netmask 255.255.255.0 dns-nameserver 8.8.8.8 8.8.4.4 post-up ip route add 192.168.178.0/24 dev wlan0 src 192.168.178.97 table rt2 post-up ip route add default via 192.168.178.1 dev wlan0 table rt2 post-up ip rule add from 192.168.178.97/32 table rt2 post-up ip rule add to 192.168.178.97/32 table rt2 post-up ip route add default via 192.168.178.1 metric 102 dev wlan0 post-down ip rule del from 0/0 to 0/0 table rt2 post-down ip rule del from 0/0 to 0/0 table rt2iface wlan1 inet static wpa-conf /etc/wpa_supplicant.conf wireless-essid xyz address 192.168.178.98 netmask 255.255.255.0 dns-nameserver 8.8.8.8 8.8.4.4 post-up ip route add 192.168.178.0/24 dev wlan1 src 192.168.178.98 table rt3 post-up ip route add default via 192.168.178.1 dev wlan1 table rt3 post-up ip rule add from 192.168.178.98/32 table rt3 post-up ip rule add to 192.168.178.98/32 table rt3 post-up ip route add default via 192.168.178.1 metric 101 dev wlan1 post-down ip rule del from 0/0 to 0/0 table rt3 post-down ip rule del from 0/0 to 0/0 table rt3If you type ip rule show you should see the following: 0: from all lookup local 32756: from all to 192.168.178.98 lookup rt3 32757: from 192.168.178.98 lookup rt3 32758: from all to 192.168.178.99 lookup rt1 32759: from 192.168.178.99 lookup rt1 32762: from all to 192.168.178.97 lookup rt2 32763: from 192.168.178.97 lookup rt2 32766: from all lookup main 32767: from all lookup default This tells us that traffic incoming or outgoing from the IP address "192.168.178.99" will use the rt1 route table. So far so good. But traffic that is locally generated (for example you want to ping or ssh from the machine to somewhere else) needs special treatment (see the big quote in the question). 
The first four post-up lines in /etc/network/interfaces are straightforward and explanations can be found on the internet; the fifth and last post-up line is the one that makes the magic happen:
post-up ip r add default via 192.168.178.1 metric 100 dev eth0
Note how we haven't specified a route table for this post-up line. If you don't specify a route table, the information will be saved in the main route table that we saw in ip rule show. This post-up line puts a default route in the "main" route table that is used for locally generated traffic that is not a response to incoming traffic. (For example an MTA on your server trying to send an e-mail.) The three interfaces all put a default route in the main route table, albeit with different metrics. Let's take a look at the main route table with ip route show:
default via 192.168.178.1 dev eth0 metric 100
default via 192.168.178.1 dev wlan1 metric 101
default via 192.168.178.1 dev wlan0 metric 102
192.168.178.0/24 dev wlan0 proto kernel scope link src 192.168.178.97
192.168.178.0/24 dev eth0 proto kernel scope link src 192.168.178.99
192.168.178.0/24 dev wlan1 proto kernel scope link src 192.168.178.98
We can see that the main route table has three default routes, albeit with different metrics. The highest priority is eth0, then wlan1 and then wlan0, because lower metric numbers indicate a higher priority. Since eth0 has the lowest metric, this is the default route that is going to be used for as long as eth0 is up. If eth0 goes down, outgoing traffic will switch to wlan1. With this setup we can type ping 8.8.8.8 in one terminal and ifdown eth0 in another. ping should still work, because ifdown eth0 will remove the default route related to eth0 and outgoing traffic will switch to wlan1. The post-down lines make sure that the related route tables get deleted from the routing policy database (ip rule show) when the interface goes down, in order to keep everything tidy. The problem that is left is that when you pull the plug from eth0, the default route for eth0 is still there and outgoing traffic fails. We need something to monitor our interfaces and to execute ifdown eth0 if there's a problem with the interface (i.e. NIC failure or someone pulling the plug). Last step: enter ifplugd. That's a daemon that watches interfaces and executes ifup/ifdown if you pull the plug or if there's a problem with the wifi connection. /etc/default/ifplugd:
INTERFACES="eth0 wlan0 wlan1"
HOTPLUG_INTERFACES=""
ARGS="-q -f -u0 -d10 -w -I"
SUSPEND_ACTION="stop"
You can now pull the plug on eth0, outgoing traffic will switch to wlan1, and if you put the plug back in, outgoing traffic will switch back to eth0. Your server will stay online as long as any of the three interfaces works. For connecting to your server you can use the IP address of eth0 and, if that fails, the IP address of wlan1 or wlan0.
I would like to have multiple NICs (eth0 and wlan0) in the same subnet and to serve as a backup for the applications on the host if one of the NICs fail. For this reason I have created an additional routing table. This is how /etc/network/interfaces looks: iface eth0 inet static address 192.168.178.2 netmask 255.255.255.0 dns-nameserver 8.8.8.8 8.8.4.4 post-up ip route add 192.168.178.0/24 dev eth0 src 192.168.178.2 post-up ip route add default via 192.168.178.1 dev eth0 post-up ip rule add from 192.168.178.2/32 post-up ip rule add to 192.168.178.2/32iface wlan0 inet static wpa-conf /etc/wpa_supplicant.conf wireless-essid xyz address 192.168.178.3 netmask 255.255.255.0 dns-nameserver 8.8.8.8 8.8.4.4 post-up ip route add 192.168.178.0/24 dev wlan0 src 192.168.178.3 table rt2 post-up ip route add default via 192.168.178.1 dev wlan0 table rt2 post-up ip rule add from 192.168.178.3/32 table rt2 post-up ip rule add to 192.168.178.3/32 table rt2That works for connecting to the host: I can still SSH into it if one of the interfaces fails. However, the applications on the host cannot initialize a connection to the outside world if eth0 is down. That is my problem. I have researched that topic and found the following interesting information: When a program initiates an outbound connection it is normal for it to use the wildcard source address (0.0.0.0), indicating no preference as to which interface is used provided that the relevant destination address is reachable. This is not replaced by a specific source address until after the routing decision has been made. Traffic associated with such connections will not therefore match either of the above policy rules, and will not be directed to either of the newly-added routing tables. Assuming an otherwise normal configuration, it will instead fall through to the main routing table. http://www.microhowto.info/howto/ensure_symmetric_routing_on_a_server_with_multiple_default_gateways.htmlWhat I want is for the main route table to have more than one default gateway (one on eth0 and one on wlan0) and to go to the default gateway via eth0 by default and via wlan0 if eth0 is down. Is that possible? What do I need to do to achieve such a functionality?
Is it possible to have multiple default gateways for outbound connections?
ifconfig is not the correct command to do that. You can use route, as in route add default gw 192.168.0.254 for example. And if route is not present but ip is, you can use it like this: ip route add default via 192.168.0.254 dev eth0, assuming that 192.168.0.254 is the IP of your gateway.
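Both commands only change the running configuration and are lost at reboot. If the embedded system uses busybox/ifupdown with /etc/network/interfaces, the persistent equivalent would be something along these lines (addresses taken from the question):
iface eth0 inet static
    address 192.168.0.101
    netmask 255.255.255.0
    gateway 192.168.0.254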
I'm trying to configure the network interface on embedded Linux using ifconfig: ifconfig eth0 192.168.0.101 netmask 255.255.255.0 but I don't know how to add the default gateway as an ifconfig parameter. Any ideas?
How to set the Default gateway
You only need a gateway entry for a NIC if you want to use it to reach a network outside the one it is directly connected to (192.168/16 in this case), so you can just omit that line if you don't want a gateway for that NIC. I'm not sure what will happen if you try to use loopback as a gateway, but I wouldn't expect it to be happy times.
Simple setup here. I have a machine with multiple network interfaces, two for example - eth0 and eth1. eth0 has a static address and has a default gateway assigned. eth1 has a static address and will not have a gateway on that interface's network address range. The Question Do I need an entry in network configuration file (/etc/network/interfaces) for the gateway option on the interface that does not have a gateway on its network, eth1 in the above example? Additional Questions If I do something like: gateway 127.0.0.1Will this have adverse effects? Will this interface now have a way to reach a gateway or will using the loopback interface as a gateway have no effect (i.e. same as leaving the gateway option off entirely)? Example config for discussion /etc/network/interfaces # The loopback network interface auto lo iface lo inet loopback# Interface 1 allow-hotplug eth0 iface eth0 inet static address 10.1.10.200 netmask 255.255.255.0 gateway 10.1.10.1# Interface 2 allow-hotplug eth1 iface eth1 inet static address 192.168.100.1 netmask 255.255.0.0 gateway 127.0.0.1 # This is the line in question
Static IP address without a gateway
The two settings do the same thing. The -W option was added in 2010 and is described as a “netcat mode”. Use ssh -W if you don't need compatibility with versions of OpenBSD prior to 4.7 or with portable OpenSSH prior to 5.5 (I think). Use nc if you do need to support older versions of OpenSSH. ssh -W is preferable if available because it's marginally more efficient and doesn't require a separate utility to be installed.
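On OpenSSH 7.3 and newer there is an even shorter spelling, ProxyJump (or ssh -J on the command line), which sets up the same kind of ssh -W forwarding for you:
Host foo
    ProxyJump example.com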
What is the difference between the following? Host foo ProxyCommand ssh example.com -- /usr/bin/nc %h %p 2> /dev/nulland Host foo ProxyCommand ssh -W %h:%p example.comWhich one should I prefer when? Is either of them faster or more efficient in some way?
SSH Gateway in ~/.ssh/config
TL;DR (1st method only)
On Desktop:
ip route add 192.168.10.0/24 dev eth0 table 1000
ip route add default via 192.168.10.1 dev eth0 table 1000
ip rule add iif lo ipproto tcp sport 22 lookup 1000
The problem
The problem here happens on the Desktop. With a different layout where the NUC reliably intercepts all flows, easier methods would have been available. This would have required the NUC to have two network devices, because routing two IP LANs on the same Ethernet LAN doesn't prevent issues, for example with DHCP. Having the NUC act as a stateful bridge would have been another solution also requiring two NICs. With the current layout, where the NUC can't intercept all traffic between the AP and the desktop... ... the solution has to be done on the Desktop. Linux can use policy routing, where a selector is used to have a different outcome (by using a different routing table) for the packet. All problems about using multiple routes for apparently the same destinations require the use of policy routing, mostly using a selector able to separate according to the source (because the routing table is already there to separate by destination). One has to somehow separate the packets coming directly from the AP from the packets coming from the NUC, so they can have a different outcome (ie: different routes) when it's about SSH connections to the Desktop. What doesn't appear to be available with ip rule is a selector where one can distinguish between two packets arriving through two routes when those routes differ only in the gateway that was used. Linux's policy rules don't appear to catch this case: as long as it's from the same interface it's the same. I'll assume that:
- Desktop's network interface is called eth0.
- Desktop isn't routing (eg: libvirt, LXC, Docker). Routing requires more configuration and a choice of what should be done (should a VM receive SSH coming from the NUC or from the AP?). The answers below would need some minor adjustments for properly creating exceptions for the routing case, or containers/VMs will just follow the default route (ie: through the NUC).
Here are two methods.
Policy routing matching layer 4 protocol (TCP port 22)
Since Linux 4.17 one can use a selector to match here on TCP port 22 with policy routing. Then it's easy to use a different route for it. Instead of handling the origin of the packet differently, handle this specific port differently:
ip route add 192.168.10.0/24 dev eth0 table 1000
ip route add default via 192.168.10.1 dev eth0 table 1000
ip rule add iif lo ipproto tcp sport 22 lookup 1000
Here iif lo isn't really about the lo interface but is the specific syntax meaning "from the local system". The LAN route must also be duplicated, or for example an SSH connection from the NUC itself would be replied to through the AP, which would emit ICMP redirects to tell about the misconfiguration. In this specific case there's no rule needed to specify an alternate route for received packets since it's the same interface. Had it been another interface with SRPF enabled (rp_filter=1), ip rule add iif eth0 ipproto tcp dport 22 lookup 1000, with eth0 replaced with the actual other interface in both the rule and the default route, would also have been needed. This is a very simple method, achieving the goal in only 3 commands.
This could be tweaked to also accept SSH from some specific LAN or address block coming through the NUC in case the VPN allows incoming traffic, but it wouldn't in any case allow receiving an SSH connection from the same single public IP source using the two destinations/routes simultaneously.

Using the AP's MAC address and marks for policy routing

Instead of the previous method, there's an indirect way to identify an incoming packet as coming from the AP gateway rather than from the NUC: its Ethernet source MAC address. This can't be used directly by policy routing, but it's possible to tag such an incoming packet with a firewall mark. A mark can be used by policy routing, and there are ways to get this mark set on reply packets. I'll split this into the incoming part and the reply part. As this doesn't depend on the specific kind of incoming traffic, no change is required to handle additional ports forwarded from the AP to the Desktop later.

I'll assume below that:

- The AP's MAC address (as seen on the Desktop with ip neigh show 192.168.10.1 after pinging it) has value 02:00:00:ac:ce:55. Replace this value below.

Incoming and common settings

One should take a look at how Netfilter, iptables and routing interact on the Netfilter packet flow schematic: an iptables rule in raw/PREROUTING will mark the packet. This is then completed by policy routing in a similar way to the previous method.

iptables -t raw -A PREROUTING -i eth0 -m mac --mac-source 02:00:00:ac:ce:55 -j MARK --set-mark 1

ip route add default via 192.168.10.1 table 1000
ip rule add fwmark 1 lookup 1000

Reply

There are two methods to handle the reply:

Simple and automatic, TCP-only

This can only be used with TCP, not other protocols, including not UDP. As the goal is TCP port 22, this is good enough for OP's case. Simply complete the Incoming part with:

sysctl -w net.ipv4.tcp_fwmark_accept=1
sysctl -w net.ipv4.fwmark_reflect=1

Explanations:

tcp_fwmark_accept: Each TCP socket created when accepting a new connection will inherit the first packet's mark, as if the SO_MARK socket option had been used for this connection only. Specifically here, all reply traffic will be routed back through the same gateway the incoming traffic arrived from, using routing table 1000 when the mark is set.

fwmark_reflect: In a similar way, reply packets handled directly by the kernel (like ICMP echo reply, or TCP RST and some cases of TCP FIN) inherit the incoming packet's mark. For example that's the case if there is no TCP socket listening (i.e. the SSH server is stopped on the Desktop). Without this mark, an SSH connection attempt through the AP would time out instead of getting a Connection Refused, because the TCP RST would be routed through the NUC (and be ignored by the remote client).

or instead...

Generic handling by transferring the mark between packet and conntrack entry and back to reply packets

A mark can be memorized as a connmark in a conntrack entry, so it affects all further packets of the flow, including reply packets, by copying it back from conntrack to mark in mangle/OUTPUT. Complete the Incoming part with:

iptables -t mangle -A PREROUTING -m mark --mark 1 -j CONNMARK --set-mark 1
iptables -t mangle -I OUTPUT -m connmark --mark 1 -j MARK --set-mark 1

This will handle all cases (including TCP RST and UDP). So the AP could be configured to forward any arbitrary incoming TCP or UDP traffic to the Desktop.
Additional documentation in this blog.

Miscellaneous

Caveats

- When an address is removed (and then probably added back) or an interface is brought down (then up), all associated routes that were manually added are deleted and won't reappear. So at least the manual ip route commands should be integrated with the tool configuring the Desktop's network, so they are added each time the network connection is made.

- Each tool has a different way to do advanced network configuration, which might be incomplete. For example Ubuntu's Netplan doesn't document in its routing-policy settings whether it's possible to use iif lo or ipproto tcp sport 22. Tools that allow custom scripts to replace unavailable features should be preferred (for example ifupdown or NetworkManager can do this).

- Nitpicking: in the extremely convoluted case, using the last method, where a single remote (public) IP address connects to the same Desktop service twice using the two routes (seen as two distinct public IP addresses, in case the VPN allows incoming traffic) and uses the same source port for both destinations, the Desktop will see the same flow twice and will be confused (two UDP flows would be merged and a 2nd TCP connection would fail). This can usually be handled when routing (with conntrack zones and/or having conntrack automatically alter a source port); it might not be possible to handle it for the host case here.

Bonus

If the Desktop is actually a router, here's how the last method using a mark and conntrack should be altered. Routes to containers must be duplicated to table 1000. This should work, but has not been tested with Docker (which can give additional challenges).

Assuming here that:

- Desktop is routing NAT-ed containers in LAN 172.17.0.0/16 through an interface called br0 (Docker would use docker0 for the default network) with local IP address 172.17.0.1/16
- Desktop DNATs some ports toward these containers

Changes:

- Rules and routes: routes to the container(s) must be copied from the main routing table to table 1000. If the container/virtualization tool dynamically adds new interfaces and routes, the new routes must be added in table 1000 too, manually or with some scripted mechanism triggered from the tool's API.

ip route add 172.17.0.0/16 dev br0 table 1000

Without this, incoming connections through the AP and marked (in the next bullets) would be routed back to the AP.

- Keep the previous rule about the MAC address in the raw table.

- Delete the previous rules in the mangle table:

iptables -t mangle -F

- Put these rules instead:

iptables -t mangle -A PREROUTING -m mark ! --mark 0 -j CONNMARK --save-mark
iptables -t mangle -A PREROUTING -m connmark ! --mark 0 -j CONNMARK --restore-mark
iptables -t mangle -A OUTPUT -m connmark ! --mark 0 -j CONNMARK --restore-mark

(some optimizations could be done at the cost of more lines for this single-mark case)

The first PREROUTING rule makes sure the conntrack mark is not overwritten by a packet mark of value 0. The 2nd PREROUTING rule sets the mark on routed traffic from containers (whose individual packets are initially unmarked) that is part of a flow initially established through the AP.
Let me try to explain my home network setup: ┌────────────────────┐ │ Internet │ │ Public IP: 1.2.3.4 │ └──────────┬─────────┘ │ ┌──────────────────┴─────────────────┐ │ ISP Modem │ │ Forward everything to AP Router │ │ 192.168.1.1 │ └──────────────────┬─────────────────┘ │ ┌─────────────────┴───────────────┐ │ AP Router │ │ DHCP happens here │ │ Forward 1122 to 192.168.10.2:22 ├─────────────┐ │ 192.168.10.1 │ │ └─────────────────┬───────────────┘ │ │ │ │ │ │ ┌───────┴───────┐ │ │ NUC (Ubuntu) │ │ │ PiHole + VPN │ │ │ 192.168.10.50 │ │ └───────────────┘ │ ▲ │ │ ┌────────────────────┴──────────────────┐ │ │ Desktop (Ubuntu) │ │ Default routing │ 192.168.10.2 │ │ │ Default gateway: 192.168.10.50 ├──────────┘ │ DNS: 192.168.10.50 │ └───────────────────────────────────────┘If the desktop uses 192.168.10.1 as the default gateway, doing, for example, SSH to 1.2.3.4:1122 works, I can SSH to the desktop. But I want the desktop to use 192.168.10.50 as the default gateway. In that case, any port forwarding does not work. After doing a little bit of research this can be done with IP tables/policy based routing, but I know nothing about that. What's the simplest way to do it?
Port forwarding does not work using different gateway
Your broadcast address should be 192.168.1.255, not 192.168.1.1.
I have had this server for a while, and it was present in other questions. A while ago, I changed the network and the gateway IP changed too. Since then, there is no internet on this machine. I need access to the internet to update the machine and (sometimes) to install packages I need for development.

What I've tried:

route add default gw 192.168.1.1 (https://unix.stackexchange.com/a/259046/133591)
ip route replace default via 192.168.1.1 (https://unix.stackexchange.com/a/199070/133591)
ip route add default via 192.168.1.1 dev eth0 (https://unix.stackexchange.com/a/259048/133591)

Editing the file /etc/network/interfaces, to look like below:

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
allow-hotplug eth0
#iface eth0 inet dhcp
iface eth0 inet static
    address 192.168.1.205
    netmask 255.255.255.0
    gateway 192.168.1.1
    broadcast 192.168.1.1

And this is the result of all my attempts:

root@webtest:~# route add default gw 192.168.1.1
SIOCADDRT: Network is unreachable
root@webtest:~# ip route replace default via 192.168.1.1
RTNETLINK answers: Invalid argument
root@webtest:~# ip route add default via 192.168.1.1 dev eth0
RTNETLINK answers: Invalid argument
root@webtest:~#

The most bizarre thing is the SIOCADDRT: Network is unreachable error, when I'm clearly connected using SSH, which uses the network. What else should I try? I don't even know what else to do. My system is running Debian 8.2 x64, with a single network interface.

Note: I have read How can I change the default gateway? and How to set the Default gateway (which is where I got all those tries from). The accepted answer on How can I change the default gateway? is a FreeBSD-exclusive answer.

Running ip addr and ip route gives the following:

root@webtest:~# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:1a:92:47:00:b5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.205/24 brd 192.168.1.1 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::21a:92ff:fe47:b5/64 scope link
       valid_lft forever preferred_lft forever
root@webtest:~# ip route
192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.205
root@webtest:~#

Edit 1: After the change that @Johan Myréen suggested, the result is still the same.
Below is the updated ip addr with 2 pings: root@webtest:~# ip addr 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether 00:1a:92:47:00:b5 brd ff:ff:ff:ff:ff:ff inet 192.168.1.205/24 brd 192.168.1.255 scope global eth0 valid_lft forever preferred_lft forever inet6 fe80::21a:92ff:fe47:b5/64 scope link valid_lft forever preferred_lft forever root@webtest:~# ip route default via 192.168.1.1 dev eth0 192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.205 root@webtest:~# ping google.com ping: unknown host google.com root@webtest:~# ping facebook.com ping: unknown host facebook.com root@webtest:~#
Can't change the default gateway with static IP
You need to create a second routing table and use policy-based routing. Applied to your case you need to:

Set up the first default route using the main routing table. This table will be used for the traffic generated locally and for the traffic from wlan1:

ip route add default via <gateway_reachable_by_eth0> table main

Create a second routing table vpn:

echo 200 vpn >> /etc/iproute2/rt_tables

Add a default route to the new table:

ip route add default via <gateway_reachable_by_tun0> table vpn

Indicate that all traffic from wlan0 should use this new table:

ip rule add from <wlan0_subnet> lookup vpn
I am building a router with an RPi (Raspbian). It has 3 network interfaces:

- eth0: connected to the Internet (IP/gateway from DHCP)
- wlan0, wlan1: local WLAN interfaces (each serving its own SSID as AP)

Moreover I have a VPN connection tun0 to a remote network, which is itself connected to the internet. Now I want:

- all traffic from wlan0 to be routed through tun0, and
- all traffic from wlan1 to be routed through eth0

As a result I want to have two WLANs, one with direct internet access and one with internet access through the VPN connection. This was very easy using two different devices, but how to do this with only one default gateway?
Configure two Routers on one Device
does the Linux kernel support setting to check an IP through a gateway and using the state of that route to also change my gateways?No. Something like that is not in scope for the kernel. You should do it in userspace. The usual kind of software daemon that has the job of controlling and maintaining your routing table probably doesn't support it either because it runs standards-based routing protocols like OSPF and BGP to learn the correct routes to install from neighbouring routers, not "ping something remote and set default route accordingly if successful", but it might have such a feature...I know for a fact setting both my routers to bridge mode and letting my server do the PPPoE handshake would work best and avoiding having to do this but at least one of my routers doesn't support bridge mode.FWIW that's not a guarantee either. Just because the PPPoE session comes up doesn't mean that the (whole) Internet is reachable through that path. There might be a problem further along in the ISP network.
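For illustration only, a minimal userspace sketch of that "ping something remote and set the default route accordingly" approach could look like the following. The gateway addresses come from the question, but the probe address, timings and the idea of pinning the probe to ISP1 are my assumptions, not a tested failover daemon:

#!/bin/sh
# Sketch: switch the default route depending on whether ISP1 actually reaches the internet.
PROBE=8.8.8.8      # assumed probe address
GW1=10.0.0.1       # ISP1 gateway (from the question)
GW2=10.0.1.1       # ISP2 gateway (from the question)

# Pin the probe address through ISP1 once, so pinging it really tests that uplink.
ip route replace "$PROBE"/32 via "$GW1"

while true; do
    if ping -c 3 -W 2 "$PROBE" >/dev/null 2>&1; then
        ip route replace default via "$GW1"   # ISP1 works: prefer it
    else
        ip route replace default via "$GW2"   # ISP1 dead: fail over to ISP2
    fi
    sleep 10
done

Because the probe route stays pinned to ISP1, the script also notices when ISP1 comes back and fails back automatically.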
I have 2 different home internet connections and I have an IP from each router's LAN (such as 10.0.0.2/24 and 10.0.1.2/24 with each ISP's routers being the .1) configured on my private server, which then distributes internet access to my network. The problem is when one of my ISPs goes down I have to manually change my default gateway from 10.0.0.1 to 10.0.1.1 because those IPs are set in each router and the routers are not in bridge mode. Since the downtime has nothing to do with the routers at my home there's no way my server can know the routing table is down because the router is still answering ICMP packets. The question is, does the Linux kernel support setting to check an IP through a gateway and using the state of that route to also change my gateways? For example, if I were to set 8.8.8.8 to always go through 10.0.0.1 and if that IP stopped responding then bring down the 10.0.0.1 default route as well, that'd work for me. I know for a fact setting both my routers to bridge mode and letting my server do the PPPoE handshake would work best and avoiding having to do this but at least one of my routers doesn't support bridge mode.
Does the Linux kernel support changing gateways based on state of IP outside your network?
It's a multipath route: both gateways are used, one has no precedence over the other, but for a specific destination (and other factors), the same gateway will always be used, to avoid disturbing flows using it. So if very few destinations are used or tested, one gateway might appear to be favored over the other.

A multipath route can be set with the "simple" syntax using directly ip route add ... nexthop ... nexthop ... or the newer and more featureful syntax using ip nexthop add id XXX ... and ip route add ... nhid XXX. Here, the route nhid 36 selects nexthop id 36, which is a nexthop group made of id 31 and id 37. They have equal participation in the group (because no specific weight was set).

An algorithm selects which gateway is used for a specific destination: the default is the hash-threshold algorithm, as mentioned in the documentation for the alternate (resilient) algorithm and RFC 2992. This algorithm ensures that on average both gateways will be used, but for a specific destination always the same one is used. One can verify this by comparing routes for multiple different destination addresses. For example, with a mock-up configuration mimicking OP's default route, a loop (with bash and jq) gave this:

# for i in 2001:db8::{{0..9},{a..f}}; do ip -6 -json route get $i; done | jq -j '.[] | .dst, " via ", .gateway, "\n"'
2001:db8:: via 2602:fbbc:1:1::1
2001:db8::1 via 2602:fbbc:1:1::2
2001:db8::2 via 2602:fbbc:1:1::2
2001:db8::3 via 2602:fbbc:1:1::2
2001:db8::4 via 2602:fbbc:1:1::2
2001:db8::5 via 2602:fbbc:1:1::1
2001:db8::6 via 2602:fbbc:1:1::1
2001:db8::7 via 2602:fbbc:1:1::1
2001:db8::8 via 2602:fbbc:1:1::2
2001:db8::9 via 2602:fbbc:1:1::1
2001:db8::a via 2602:fbbc:1:1::1
2001:db8::b via 2602:fbbc:1:1::1
2001:db8::c via 2602:fbbc:1:1::2
2001:db8::d via 2602:fbbc:1:1::2
2001:db8::e via 2602:fbbc:1:1::2
2001:db8::f via 2602:fbbc:1:1::1

The result on another system might differ, but overall both gateways will be used evenly, with only one of them per destination, to minimize flow disruption (e.g. a firewall in the path should see the whole flow rather than only a part of it).

The hash is actually not based only on the destination, but might use source, protocol and probably other properties (e.g. adding ipproto tcp to the ip route get command above changes the result; choosing udp or ipv6-icmp instead of tcp changes it again).
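For reference, a multipath default route like the one in the question could be created by hand with either syntax. This is only a sketch reusing the question's addresses and nexthop ids; in the question's setup FRR/zebra installs these entries itself:

# legacy multipath syntax
ip -6 route add default \
    nexthop via 2602:fbbc:1:1::1 dev 000_bridge weight 1 \
    nexthop via 2602:fbbc:1:1::2 dev 000_bridge weight 1

# newer nexthop-object syntax (the form zebra produced here)
ip nexthop add id 31 via 2602:fbbc:1:1::1 dev 000_bridge
ip nexthop add id 37 via 2602:fbbc:1:1::2 dev 000_bridge
ip nexthop add id 36 group 31/37
ip -6 route add default nhid 36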
I'm running BGP using FRR on Debian Linux on several machines. My question might end up having to do with something in the FRR/BGP configuration but I'm trying to understand at a more basic level why a particular IPv6 route selection is happening (from the Linux kernel). I have a machine "a3" which is peered with "a1" and "a2". "a1" and "a2" are route reflectors and are both providing a default gateway to a3. Here you can see a3's IPv6 routing table: root@a3:~# ip -6 route ::1 dev lo proto kernel metric 256 pref medium 2602:fbbc:0:2::/64 dev vxbr2 proto kernel metric 256 pref medium 2602:fbbc:0:65::/64 dev vxbr101 proto kernel metric 256 pref medium 2602:fbbc:1:1::/64 dev 000_bridge proto kernel metric 256 pref medium fe80::/64 dev 000_bridge proto kernel metric 256 pref medium fe80::/64 dev vnet7 proto kernel metric 256 pref medium fe80::/64 dev vxbr101 proto kernel metric 256 pref medium fe80::/64 dev vxbr2 proto kernel metric 256 pref medium fe80::/64 dev vnet40 proto kernel metric 256 pref medium fe80::/64 dev vnet43 proto kernel metric 256 pref medium fe80::/64 dev vnet46 proto kernel metric 256 pref medium fe80::/64 dev vnet47 proto kernel metric 256 pref medium fe80::/64 dev vnet54 proto kernel metric 256 pref medium fe80::/64 dev vnet57 proto kernel metric 256 pref medium fe80::/64 dev vnet58 proto kernel metric 256 pref medium fe80::/64 dev vnet63 proto kernel metric 256 pref medium fe80::/64 dev 001_bridge proto kernel metric 256 pref medium default nhid 36 proto bgp metric 20 pref medium nexthop via 2602:fbbc:1:1::1 dev 000_bridge weight 1 nexthop via 2602:fbbc:1:1::2 dev 000_bridge weight 1As I understand it, the line near the bottom reading default nhid 36 proto bgp metric 20 pref medium is indicating that the nexthop entry numbered 36 is being used as the default route, which contains two other separate entries, one for 2602:fbbc:1:1::1 and one for 2602:fbbc:1:1::2. Here's the nexthop table: root@a3:~# ip nexthop id 15 dev 001_bridge scope host proto zebra id 16 dev 000_bridge scope link proto zebra id 26 dev vxbr2 scope link proto zebra id 27 dev vxbr101 scope link proto zebra id 31 via 2602:fbbc:1:1::1 dev 000_bridge scope link proto zebra id 32 via 10.1.0.1 dev 001_bridge scope link proto zebra id 36 group 31/37 proto zebra id 37 via 2602:fbbc:1:1::2 dev 000_bridge scope link proto zebraSo I would think, due to the sequence here (it is earlier in the nexthop list, lowered numbered and first in the sequence of id 36 group 31/37 proto zebra) that 2602:fbbc:1:1::1 would be selected as the default gateway, but this is not the case. Looking up any random public IPv6 address gives: root@a3:~# ip -6 route get 2001:4860:4860::8888 2001:4860:4860::8888 from :: via 2602:fbbc:1:1::2 dev 000_bridge proto bgp src 2602:fbbc:1:1::a3 metric 20 pref mediumAnd I can confirm via traceroute6 and any other tools available that 2602:fbbc:1:1::2 is definitely being selected as the gateway, not 2602:fbbc:1:1::1. And I have no idea why. Also, ip -6 route show cache gives no output, and ip -6 route flush cache has no effect, so it doesn't seem to be route cache related. There do not appear to be any custom rules configured either: root@a3:~# ip -6 rule show 0: from all lookup local 32766: from all lookup mainI'm sure I will have more to tweak on the BGP configuration to resolve this but just from the perspective of how the route selection is done in Linux, does anyone have an idea on what could be causing this? (And any ideas on what parameter could be tuned to fix it?)
Unexpected Route Selection
So, I said the next step was to write scripts. Well, here they are. To explain, the solution I've come up with has two main parts: a template for the dhcpd.conf file, and a script to query the needed data from dhcpcd, parse it, apply it to the template, save the result as /var/local/dhcpd6-lan.conf, then restart dhcpd to use the new settings. The script makes use of dhcpcd's run-hooks feature. Basically, when dhcpcd does anything, including receiving responses from upstream, it runs /etc/dhcpcd.exit-hook with various environment variables set to the values of relevant DHCP configuration options. I've simply written a hook for the DELEGATED6 action, which fires when dhcpcd assigns an IP from an IPv6 prefix response. I had to implement my own logging because dhcpcd-run-hooks appears to squelch all stderr and stdout from hook scripts.

I do not like this solution. I will not be marking it as the accepted solution. I've put much effort into making it as rugged as possible but it still feels like too many points of potential failure. For now, it's getting the job done. I still feel like there has to be a better way.

UPDATE - 8 Months Later: Well, looks like I was wrong. The scripts I created have proven remarkably reliable. Eight months and not a single hiccup. The scripts have also grown significantly more robust, with some potential corner cases cut out, the ability to update many different configuration files, and a simple JSON state file for tracking the current state. Given the reliability and robustness of this solution at this point, I'm going to go ahead and accept this answer. I may look into improvements with tools like envsubst in the future.
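The actual scripts are not reproduced here, but a rough sketch of such an exit hook might look like the following. Only $reason is guaranteed by dhcpcd-run-hooks; the template path, output path, unit name and the way the prefix is recovered from the lan interface are my own assumptions:

# /etc/dhcpcd.exit-hook -- sketch only, not the author's actual script
if [ "$reason" = "DELEGATED6" ]; then
    # read the <prefix>::1/64 address dhcpcd just put on the LAN interface
    addr=$(ip -6 addr show dev lan scope global | awk '/inet6/ {print $2; exit}')
    prefix=${addr%::1/64}                      # e.g. 2001:db8:1:2::1/64 -> 2001:db8:1:2
    if [ -n "$prefix" ] && [ "$prefix" != "$addr" ]; then
        logger -t dhcpd6-hook "prefix ${prefix}::/64 delegated, regenerating dhcpd config"
        sed "s/@PREFIX@/${prefix}/g" /etc/dhcp/dhcpd6-lan.conf.template \
            > /var/local/dhcpd6-lan.conf &&
        systemctl restart dhcpd6-lan.service   # hypothetical unit name
    fi
fi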
Context

I'm currently neck deep in building an internet gateway out of an old fanless, headless Intel Atom/ITX computer I had laying around. These are my requirements:

- have an IPv4 and IPv6 IP provided by my ISP's DHCP assigned to the internet-facing interface
- receive an IPv6 prefix provided by my ISP's DHCP
- have a static, private IPv4 IP facing the LAN
- have an IPv4 DHCP server on the LAN-facing interface
- have an IPv6 IP set as <prefix>::1/64 on the LAN-facing interface
- have a DHCPv6 server providing stateful assignment of addresses within the prefix provided by my ISP to LAN clients
- must be resilient to disconnects and reconnects on both LAN- and WAN-facing interfaces
- must function as a network appliance: no maintenance beyond security updates

I want to use stateful DHCPv6 instead of stateless DHCP or SLAAC because I will be setting up DDNS managed by my new gateway, as well as RADIUS and a few other odds and ends... some of which will be used to determine what IP clients end up with.

I currently have everything working on the IPv4 side. Like clockwork. The gateway itself has a fully functioning dual-stack connection to the internet and can access resources both via IPv4 and IPv6. I've also implemented a netfilter-based firewall for both IPv4 and IPv6. I've even got the LAN side assigned a static private IPv4 address and a <prefix>::1/64 address. And I can provide clients on my LAN with an IPv4 address, DNS, domain, gateway and all the rest via DHCP. Resilience to disconnects and reconnects is provided by ifplugd.

The Problem

What I can't do is provide stateful IPv6 addresses in the range of <prefix>::0/64 to clients via DHCP. I'm stuck with the reality that dhcpd needs me to set a static name server, static gateway, and static prefix in its config file... yet all of those are dynamically assigned based on the prefix given by my ISP. I've been through the dhcpd.conf manpage a number of times now and I don't see anything offering a way to assign these dynamically. That dhcpcd stores its lease data in binary format doesn't help matters. I've found a workable way to query dhcpcd for its lease data, so that's not a problem anymore. I've written some scripts/systemd units to manage querying the data I need from dhcpcd and (re)starting dhcpd with the appropriate flags (will add as an answer soon). But this is clunky and I fear all the ways it could quirk out on what is meant to be a plug-and-go unit. If I have to ssh into my gateway later to fix something... it means I've failed.

My Questions:

1. Am I just missing the obvious here? If so, what am I missing? If I am after another 48 hours of digging through man pages and RFC documents... then it's just going to go right on being missed.
2. Can WIDE or another all-in-one DHCP client/server provide for my lofty goals (RADIUS, server-managed DDNS, etc)?
3. Can I use a link-local or private IP for the gateway IP in a different subnet? Like... can fd41:2a0d:e8e4:0::1 be the entry sent as the router option for a subnet of 26AA:A4A4:300:22AF::/64 if all clients have IPs for both that and the fd41:2a0d:e8e4:0::/64 subnet? I've read that using the link-local IP of the server is preferred over the globally routable one specifically because of my issue... but the idea of setting a gateway IP outside the subnet it's for just seems wrong. Yes... but it doesn't fully solve my problem.

P.S.
Before anyone asks, I started off trying to use dhclient on the wan side (sticking with ISC tools for dns/dhcp), but it didn't want to resolve IPv6 and IPv4 on the same interface and wouldn't let me query a v6 IP and a v6 prefix at the same time. Probably my fault... but I gave up and switched to dhcpcd as a result.

Config Files

radvd.conf:

interface lan {
    AdvManagedFlag on;
    AdvSendAdvert on;
    #AdvAutonomous off;
    AdvOtherConfigFlag on;
    IgnoreIfMissing on;
    AdvDefaultPreference high;
    MaxRtrAdvInterval 60;
};

dhcpcd.conf:

hostname
duid
persistent
option rapid_commit
option classless_static_routes
option interface_mtu
require dhcp_server_identifier
noipv6rs
waitip 6
waitip 4
denyinterfaces lan

interface wan
    ipv4
    ipv6
    dhcp
    dhcp6
    ipv6rs
    ia_na 1
    #ia_pd 2 lan
    ia_pd 2/::/64 lan/0/64
DHCP-PD on wan side with Stateful DHCPv6 assigned globally routable ips on lan side. Need to dynamically configure DHCPv6 on lan side
Similar to this question: https://askubuntu.com/questions/948453/internet-connection-stops-after-one-interface-is-down-in-ubuntu-server I will use NetworkManager to manage my connections (the 3G one too). However, when internet access is lost on the interface that has the lowest metric, the system cannot access the internet anymore even though an interface with a higher metric that is connected to the internet exists (e.g. a 3G modem); in that case one needs to manually increase the metric of the interface that lost its internet access. I will see if I can request a new system image with interface bonding built into it. Otherwise, one can write a service that monitors whether the default route with the lowest metric is connected to the internet and controls NetworkManager accordingly.
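A rough sketch of such a monitor is shown below. The connection name, probe address, metric values and polling interval are all assumptions to be adapted; it simply raises the eth0 route metric when eth0 loses internet access so the other interface's default route wins:

#!/bin/sh
ETH_CON="Wired connection 1"   # NetworkManager name of the eth0 connection (assumption)
PROBE=8.8.8.8                  # assumed probe address
current=""

while true; do
    if ping -I eth0 -c 2 -W 2 "$PROBE" >/dev/null 2>&1; then
        wanted=100    # eth0 has internet: keep it preferred
    else
        wanted=700    # no internet via eth0: let the 3G route win
    fi
    if [ "$wanted" != "$current" ]; then
        nmcli connection modify "$ETH_CON" ipv4.route-metric "$wanted" &&
        nmcli connection up "$ETH_CON" >/dev/null 2>&1
        current=$wanted
    fi
    sleep 30
done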
I have an embedded system running Ubuntu 18.04 LTS that has a 3G modem and an Ethernet interface (eth0). Both of these interfaces have access to Internet. When the internet is unavailable on the Ethernet interface (cable is unplugged), I want to set the default gateway to the one of the 3G modem automatically, so the system can always access the internet. For now, for test purposes and for the sake of simplicity, I'm substituting the 3G modem with another ethernet interface (a USB Ethernet adapter connected to another network - interface enxd037458b96e3) and I noticed that when the Ethernet connection is lost on eth0 (its cable is connected to a 4 port gigabit router) the default gateway disappears and I don't have access to internet whereas the USB Ethernet interface is active (and its IP address is automatically assigned with DHCP just like eth0). In that case, this is the output of the route command : $ route Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 192.168.2.0 0.0.0.0 255.255.255.0 U 0 0 0 uap0 192.168.137.0 0.0.0.0 255.255.255.0 U 0 0 0 enxd037458b96e3Whereas when the eth0 is up or recoverd : $ route Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface default 192.168.30.102 0.0.0.0 UG 0 0 0 eth0 192.168.2.0 0.0.0.0 255.255.255.0 U 0 0 0 uap0 192.168.30.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 192.168.137.0 0.0.0.0 255.255.255.0 U 0 0 0 enxd037458b96e3Below, the content of this /etc/network/interfaces # interfaces(5) file used by ifup(8) and ifdown(8) # Include files from /etc/network/interfaces.d: source-directory /etc/network/interfaces.d auto lo iface lo inet loopbackiface eth0 inet dhcp # post-up route add default via [gateway-ip-address] dev eth0# interface usb eth allow-hotplug enxd037458b96e3 iface enxd037458b96e3 inet dhcp# auto wlan0 allow-hotplug wlan0 iface wlan0 inet dhcp wpa-conf /etc/wpa_supplicant/wpa_supplicant.confauto uap0 # allow-hotplug uap0 iface uap0 inet static address 192.168.2.10 netmask 255.255.255.0 broadcast 192.168.2.255 post-up /etc/rc.apstart || true # post-up /bin/uaputl.exe sys_cfg_80211d country DE || true # post-up /bin/uaputl.exe sys_config /etc/uapTest.conf || true # post-up /bin/uaputl.exe bss_start || true post-down /bin/uaputl.exe bss_stop post-down /bin/uaputl.exe sys_reset # post-up /sbin/ifconfig uap0 192.168.0.2 UPDATE: Unlike the eth0 interface, when I unplug the cable from USB Ethernet adapter, route always shows the default gateway of this last. With the eth0 interface, the default gateway disappears and will be back only if the cable is plugged on it. $ route Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface default 192.168.137.1 0.0.0.0 UG 0 0 0 enxd037458b96e3 192.168.2.0 0.0.0.0 255.255.255.0 U 0 0 0 uap0 192.168.30.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 192.168.137.0 0.0.0.0 255.255.255.0 U 0 0 0 enxd037458b96e3
Use the internet access of another network interface when the main one is down
Some modem cards are doing Proxy ARP. That means you can tell the default route is through the card without gateway and your system will start issuing an ARP request for any IP (eg ARP for 8.8.8.8 following a ping 8.8.8.8), since the card looks like ethernet, as if the whole Internet was on the LAN. If the card is doing proxy ARP, this will work. Example with a card named wwan0: ip route add default dev wwan0If it's not doing proxy ARP, nothing much will happen beside a timeout after 3s with the message "Destination Host Unreachable" for any Internet IP. You have to test and see what's the result.
I know how to assign a default gw to an interface with an IP ip route add default via <host> dev <dev> # e.g. ip route add default via 192.168.0.101 dev eth0The problem is that the IP of eht0 in my scenario is externally managed. Therefore the previous command will not work if the IP of eth0 is changed. Is there a simple way to assign an interface as the default gw, independently of the IP it has?Note: The interface is not UP when booting the machine. Note 2: My interface is a 3g modem, therefore I also DONT KNOW the gateway IP before I make a petition to connect.
Define network interface as the default gw independently of IP
You can see the route is added by connmand (the ConnMan daemon). It's not related to the normal interfaces settings but to a separate config. If you cannot disable it completely (I don't know whether it's needed for your BBB, whatever that is), you have to look at its configuration. If you would post that configuration and tell us what job connman is needed for, someone could assist you further.

The solution in this case was simply stopping connman from handling eth0, by changing the last line of /etc/connman/main.conf to NetworkInterfaceBlacklist=eth0,SoftAp0,usb0,usb1. That changed the output of route to:

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         192.168.5.254   0.0.0.0         UG    0      0        0 eth0
192.168.5.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0

With that everything seems to work fine.
I'm using a Beaglebone Black running a webserver on a Debian system. The BBB is working as a DHCP + DNS server (using dnsmasq) in a local network (192.168.5.xyz) with no direct internet access. I can easily connect devices that retrieve an IP from the BBB. So far so good.

In case I'm at home, for example, I'd like to add internet access to this little network. So I connect this network to a router that provides internet access and has a static IP address (192.168.5.254) within this network. So I added the router's IP to the /etc/network/interfaces file:

/etc/network/interfaces:

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
#auto eth0
#iface eth0 inet dhcp

allow-hotplug eth0
iface eth0 inet static
    address 192.168.5.1
    netmask 255.255.255.0
    gateway 192.168.5.254

But for some reason an extra default routing entry is added whenever I reboot my BBB. When I manually delete/flush the default entry with GW 0.0.0.0 everything works fine.

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         0.0.0.0         0.0.0.0         U     0      0        0 eth0
default         192.168.5.254   0.0.0.0         UG    0      0        0 eth0
link-local      0.0.0.0         255.255.0.0     U     0      0        0 eth0
192.168.5.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0

It seems that the unwanted default gateway is added during boot, journalctl -b:

…
Nov 06 11:29:40 webserver connmand[1827]: eth0 {add} address 192.168.5.1/24 label eth0 family 2
Nov 06 11:29:40 webserver avahi-daemon[1792]: Joining mDNS multicast group on interface eth0.IPv4 with address 192.168.5.1.
Nov 06 11:29:40 webserver connmand[1827]: eth0 {add} route 192.168.5.0 gw 0.0.0.0 scope 253 <LINK>
Nov 06 11:29:40 webserver avahi-daemon[1792]: New relevant interface eth0.IPv4 for mDNS.
Nov 06 11:29:40 webserver avahi-daemon[1792]: Registering new address record for 192.168.5.1 on eth0.IPv4.
Nov 06 11:29:40 webserver connmand[1827]: eth0 {add} route 0.0.0.0 gw 192.168.5.254 scope 0 <UNIVERSE>
…

I can also see the "wanted" routes I made in /etc/network/interfaces. These are also made by the ConnMan daemon. But /etc/connman/main.conf is apparently not the file that is causing the default route with gateway 0.0.0.0:

[General]
PreferredTechnologies=ethernet,wifi
SingleConnectedTechnology=false
AllowHostnameUpdates=false
PersistentTetheringMode=true
NetworkInterfaceBlacklist=SoftAp0,usb0,usb1

Do you have any hints on how to find out where the extra route is added and how to prevent it? I've already looked through several scripts that are called during boot but couldn't find it... Or is the way I'm setting up eth0 completely wrong?
How to prevent an unwanted default gateway to be added during reboot
A gateway would need to be configured in your interfaces file; e.g., something like

iface wlan0 inet static
    address 192.168.x.y
    gateway 192.168.x.z
    netmask 255.255.255.0

would work (where x is your network number, y the address for your host, and z the address for your gateway). Obviously you need to retain your encryption settings, too. If you're using DHCP on that interface, then something is wrong with your DHCP server.

EDIT: you should also make sure no other network interface has a gateway setting, or if it does, that the gateway setting on that interface is correct. A "gateway" or "default gateway" is a machine which offers a connection to the Internet. It is a valid configuration to have a network interface without a gateway line if no such host exists on that network connection. In your case, assuming there is no internet router on the network that eth0 is linked to, you should ensure that the iface eth0 stanza looks like this:

iface eth0 inet static
    address 192.168.1.115
    netmask 255.255.255.0

i.e., what you already have, but without the gateway 192.168.1.1 line. (The indentation at the start of the lines is optional, but does make the file easier to read.)
I am using a private hotspot to connect a Raspberry Py to internet. I've setup the password and the ssid in the /etc/network/interfaces file. With this configuration I'm able to connect to the wifi but I can't connect to internet. pi@tenzo /etc $ ping google.com PING google.com (173.194.40.2) 56(84) bytes of data. From tenzo.local (192.168.1.115) icmp_seq=1 Destination Host UnreachableI've asked around and they said it's a gateway issue. Running traceroute from a laptop connected to the same network I get: userk@dopamine:~$ traceroute google.com traceroute to google.com (216.58.212.110), 30 hops max, 60 byte packets 1 192.168.43.1 (192.168.43.1) 2.423 ms 5.088 ms 5.084 ms 2 * * * 3 10.4.129.165 (10.4.129.165) 120.018 ms 120.027 ms 120.020 ms 4 10.4.129.196 (10.4.129.196) 129.488 ms 129.490 ms 129.471 ms 5 10.4.129.196 (10.4.129.196) 138.994 ms 141.969 ms 144.439 msDo you have any advice? EDIT 1 I've added to the interfaces the gateway, address and netmask. SEE EDIT 2 Now, when I ping google.com I get the same error as before... This is the output of route -n pi@tenzo ~ $ route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 0.0.0.0 192.168.1.1 0.0.0.0 UG 0 0 0 eth0 0.0.0.0 192.168.43.1 0.0.0.0 UG 303 0 0 wlan0 192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 192.168.43.0 0.0.0.0 255.255.255.0 U 303 0 0 wlan0EDIT 2 This is my interfaces file: auto lo iface lo inet loopbackauto eth0 allow-hotplug eth0 iface eth0 inet staticaddress 192.168.1.115 netmask 255.255.255.0 gateway 192.168.1.1auto wlan0 allow-hotplug wlan0 iface wlan0 inet static address 192.168.43.235 netmask 255.255.255.0 gateway 192.168.43.1 wpa-ssid "UserKOnTheNet" wpa-psk "xxxxx"This is the output of traceroute pi@tenzo ~ $ traceroute google.com traceroute to google.com (173.194.40.7), 30 hops max, 60 byte packets 1 tenzo.local (192.168.1.115) 2995.172 ms !H 2995.058 ms !H 2995.016 ms !H
How to set up the gateway for wlan0?
Use "option dhcp-parameter-request-list" to limit the parameters that the DHCP server can return to the client. see man page for dhcp-options (5). In that innermost group add something like: option dhcp-parameter-request-list 1,2,6,12,15,42,51,53,54,61,119;Note that the routers option has a code number of 3, so we are leaving that one out of the allowed list. See RFC-2132 for the list of DHCP options and their codes.
Given the below configuration, how can I undo the routers option for the inner group so that certain machines don't get told what the default gateway is from the outer group? group { # A bunch of options go here which should apply to all clients. option domain-name-servers ... option routers 192.168.0.1; # This host should be told about the default gateway. This is just an example, # there are many more. host example1 { } group { # This group should have no default gateway supplied by the DHCP server, # but otherwise inherit all the options from the parent group. option routers 0.0.0.0; host example2 { } } # There are many other groups that I have omitted for simplicity. group { } }In my case I am using some IP cameras I bought from China, and they phone home (actually to servers located in the US) but nonetheless there is no way to disable this on the devices themselves. I have blocked the traffic at my firewall but I would prefer that it doesn't even get that far, by telling the camera there is no default gateway available, so it doesn't know where to send packets destined for outside my network. When I use the above config, the 0.0.0.0 option is ignored by the ISC DHCP server and it still gives out the gateway inherited from the parent group. Is there a way to completely override the routers option from the parent group { }?
How to override isc-dhcpd option in child group, to hide default gateway
The value of Gateway for the default destination shows as "gateway" because the route command is resolving the IP to a name; if you want to see the IP, use route -n, which tells it not to resolve names. Look at the man page of route for more info. For the second question: one row is for the subnet of the network, which is 10.0.2.0/24, and the other one is for the default gateway, so in this case the gateway is in the subnet 10.0.2.0/24. It's possible to have more than two rows with eth0 in the Iface field, for example when you have aliases on that interface, so it's not a strange case.
The output of the route command is as below. This is in a virtual machine. My question is: why is the value of Gateway for the default destination shown as "gateway" and not as the IP address of my router? Also, why are there two rows for eth0?
Default value of Gateway in route o/p
The syntax I was using was for a newer version of PF, and in order to fix it I just upgraded to the latest version of OpenBSD.
I am running OpenBSD 4.4, I copied the template from https://markshroyer.com/guides/router/ch07.html for /etc/pf.conf and edited it to match my network. When I start it I get syntax errors on only three lines that I didn't need to edit: match on $if_wan scrub (reassemble tcp random-id max-mss 1440), match out on $if_wan from $net_private to !(if_wan) nat-to ($if_wan) which I tried editing to match out on $if_wan from $net_private to !($if_wan) nat-to ($if_wan), and pass in on $if_lan net photo tcp to port ftp rdr-to 127.0.0.1 port 8021 These are the only lines that claim a syntax error. Anyone know what the problem is and how to fix it?
Configuring OpenBSD's PF as a router
From my understanding you are only able to have one default route, and it seems Debian will take the first interface configured with a gateway as the default route; in your case eth0. If you are attempting to route to a different subnet within your local network: your system is on the 192.168.1.0/24 network, but you would require a static route only to route 192.168.0.0/24 traffic through the local network. If you were to set this as the default gateway, this would route all traffic through this gateway, which I assume has no other route to the internet.

To add the static route to this second network you could add a new line to your interfaces file with

up route add -net 192.168.0.0/24 gw 192.168.1.16 dev eth1

This should make a static route that will persist through a reboot.
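For example, the eth1 stanza from the question could become something like this (a sketch; it assumes 192.168.1.16 is reachable on eth1 and adds the matching down line for symmetry):

auto eth1
iface eth1 inet static
    address 192.168.1.5/24
    up route add -net 192.168.0.0/24 gw 192.168.1.16 dev eth1
    down route del -net 192.168.0.0/24 gw 192.168.1.16 dev eth1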
I'm using the following /etc/network/interfaces configuration on a Debian machine: auto lo iface lo inet loopbackauto eth0 iface eth0 inet static address 10.0.0.5/24 gateway 10.0.0.1auto eth1 iface eth1 inet static address 192.168.1.5/24 gateway 192.168.1.16The output of route shows that the very last line of the configuration is ignored: Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface default _gateway 0.0.0.0 UG 0 0 0 eth0 link-local 0.0.0.0 255.255.255.0 U 1000 0 0 eth0 10.0.0.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1Once I run route add -net 192.168.0.0/24 gw 192.168.1.16, then the route shows the expected gateway: Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface default _gateway 0.0.0.0 UG 0 0 0 eth0 link-local 0.0.0.0 255.255.255.0 U 1000 0 0 eth0 10.0.0.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 192.168.0.0 192.168.1.16 255.255.255.0 UG 0 0 0 eth1 192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1Why is gateway statement not applied automatically? What am I missing?
Isn't the gateway statement supposed to work for a subnet?
Well, it seems gateway (also known as default gateway) is something different from what you think it is, since the way you have it configured looks incorrect to me. The default gateway defines how the machine should try to reach an IP in a network it doesn't know about, which is not in any of the networks directly attached to this machine, or networks for which the machine has static routes configured to. In short, the default gateway is the way by which the machine can reach the Internet. In particular, you most typically don't have default gateways in multiple interfaces (since typically only one interface goes to the Internet, the others go to internal networks.) So I'd expect to see a default gateway configured on either eth0 or eth1, but not both... Furthermore, the default gateway typically should be configured in the interface where that IP belongs (since you want it to be configured as that interface is brought up.) So I'd expect GATEWAY=192.168.60.60 to be configured in the eth0 configuration, since that interface handles the 192.168.60.x network and it's the one where IP 192.168.60.60 is actually reachable. (Of course, that's assuming 192.168.60.60 is actually the default gateway through which you can reach the Internet, otherwise you shouldn't list it at all.) IP 192.168.50.55 looks problematic, since it's not an IP on either one of the two configured networks (192.168.60.x on eth0 or 192.168.110.x on eth1), so configuring such a default gateway will plainly not work, since it's not attached to any of the known networks, so your machine doesn't know how to reach it. If you configure default gateways in the wrong places and configure many of them, it's quite possible that the network scripts will still configure them both and you might end up having one, the other or maybe both listed, resulting into a configuration that works, or doesn't work, or work sometimes. So what you report about settings getting mixed up when bonding gets involved doesn't surprise me. My advice here is that you try to understand how default gateways work, reconfigure your files to only list the correct one in the correct place, retest it, then go back to setting up bonding on your VLAN 14. If you have follow up questions, this site can be a good resource. In that case, you might want to further describe your network, the IP ranges and how it's connected to the Internet, you might get more specific recommendations then.
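To make that concrete, a sketch of the corrected files could look like this (assuming 192.168.60.60 really is the gateway through which the Internet is reached; adjust to your actual network):

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.60.2
NETMASK=255.255.255.0
GATEWAY=192.168.60.60    # the default gateway belongs to this network

# /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.110.5
NETMASK=255.255.255.224
# no GATEWAY line here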
I have a physical system with CentOS 6 as the OS. On eth0 I set the IP, gateway and netmask as below, and the physical port is attached to a switch port whose VLAN is 12.

DEVICE=eth0
TYPE=Ethernet
UUID=20b60816-f5eb2e4
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.60.2
GATEWAY=192.168.50.55
NETMASK=255.255.255.0

On eth1 I set these, and the physical port is attached to a switch port with VLAN 14.

DEVICE=eth1
TYPE=Ethernet
UUID=9de7-14f13f5eb2e4
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.110.5
GATEWAY=192.168.60.60
NETMASK=255.255.255.224

The default route is set on eth1, so when I run route the gateway is 192.168.60.60. But when I bond eth1 and eth2 (eth2's VLAN is also 14), with the default route still on eth1, and I run route, the gateway is 192.168.50.55! Why does this happen and what should I do?

UPDATE: based on the answer below I found these documents: CentOS documentation, Red Hat documentation.
system get wrong gateway when bonding
You misunderstand the meaning of a gateway. You need a gateway for reaching an IP address which is not on the local link. 10.2.2.0/255.255.255.0 is directly reachable on the link, thus you do not need a gateway for it. This entry does not show which gateway is used but which interface is used for this subnet.
When I cat the file /etc/sysconfig/network-scripts/ifcfg-ens192, a line shows the gateway for ens192:

GATEWAY=10.2.2.2

but when I run netstat -r or route:

Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
default         gateway         0.0.0.0         UG        0 0          0 ens192
10.2.2.0        0.0.0.0         255.255.255.0   U         0 0          0 ens192
10.2.3.0        0.0.0.0         255.255.255.0   U         0 0          0 ens256

the gateway seems to be 0.0.0.0. What is the difference between the two gateways? I am so confused.
What is difference between the gateway in ifcfg-ens192 and the one print by route
If you aren't too exact with your 1-second timeframe, you can use an infinite loop in a shell script:

#!/bin/sh
while true; do
    ping -i 1 192.0.2.0
done

Should ping exit, the loop starts it again. The -i option sets the interval between pings.
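If you want this to run unattended in the background with no output, one option is to wrap it in a systemd unit. This is only a sketch (the unit name is made up and 192.0.2.0 is a placeholder for your gateway); with Restart=always, systemd itself restarts ping if it ever exits, so the shell loop isn't even needed:

# /etc/systemd/system/gateway-ping.service
[Unit]
Description=Ping the gateway once per second

[Service]
ExecStart=/usr/bin/ping -i 1 192.0.2.0
StandardOutput=null
Restart=always

[Install]
WantedBy=multi-user.target

Enable it with systemctl enable --now gateway-ping.service.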
I want to set up a continuous task that pings a gateway every second. How would I go about it? The most performance friendly solution would be best. No output needed, I just want it to ping.
How to set up a ping task on Antergos/Arch Linux?
You might want to check out Shorewall, which is a tool for configuring iptables. It's really easy and powerful. Shorewall is in the Debian repos; you can install it with apt-get. And there are some preconfigured files for the two-interface setup in:

/usr/share/doc/shorewall/examples

Some useful documentation:
http://www.shorewall.net/two-interface.htm
https://wiki.debian.org/HowTo/shorewall
I want to use a dedicated Debian 7 machine as a gateway for a LAN in my home. My machine has two network cards: eth1 (external Internet) and eth2 (internal LAN). This eth2 is the one which has an IP address like 192.3.1.1 and is used as the default gateway for the internal machines. I saw this article but the script is outdated for Debian 7. How can I set up a simple Debian 7 machine as a gateway?
How to set up a simple Debian 7 machine as a gateway?
The next steps to take might be as simple as adding one policy rule and one routing table entry. You can try: ip route add default via 172.17.100.1 table 1000 ip rule add type unicast iif eno2 table 1000 pref 30000I'm assuming that 172.17.100.1 is the other end of your VPN tunnel. I used the numerical value of 1000 for the table, but since you added an entry with name "vpntunnel", you could use the name. Your VPN might already have some configuration setup that could conflict with what I'm suggesting. The outputs of "ip rule show" and "ip route show table all" could be useful in the question. Also, existing iptables configuration could also require a change to the suggested solution.
I have a Debian Linux VPN router myvpnserver with 2 interfaces, eno1 and eno2:eno1 is connected to a LAN and an internet router. On this interface with static IP address, myvpnserver has its default gateway (to the Internet). OpenVPN connects to a VPN server using this internet connection. eno2 is connected to a switch. A DHCP server runs on this interface. I want all the traffic from clients connected to eno2 to be routed through the VPN / tun0.The basic setup works fine. From hosts attached to eno2 I can reach hosts in the remote VPN LAN (e.g. 10.123.0.0/24). My next goal is routing on myvpnserver depending on the source address or interface. If myvpnserver connects to the Internet (e.g. ftp.debian.org or VPN host), it should use the default gateway via eno1. If a client attached to eno2 wants to connect to the same Internet hosts (e.g. ftp.debian.org), the traffic should be routed through the VPN / tun0 instead of myvpnserver 's default gateway. // For Incoming Traffic: If( InputInterface = eno2 ) Then default_gateway = 172.17.100.1 Else default_gateway = gateway as declared in /etc/network/interfaces End IfI found out policy-based routing seems to be the way to go. While "normal" routing is based on the destination only, policy-based routing is said to be able to consider additional aspects like the input interface or source IP range. What are the steps to take? (I'm on Debian 11 / Bullseye.) 1.) I added a row 1000 vpntunnel to /etc/iproute2/rt_tables. What's next? Can you point me to a configuration example? Thanks a lot for any advice!
Route all traffic from one interface (default gateway) through OpenVPN / tun0 - policy based routing, dependent on source address/interface
If you have access to another host on the same network segment, look at its TCP settings. Chances are, the same Default Gateway will work for another host on the same segment. Do not assign the same IP address. If the IP address of the RHEL7 host was set via DHCP and it was assigned a different (presumably incorrect) Default Gateway, this is something you should bring to the attention of your network administrator. It is entirely possible, however, that the MAC address of the host in question was also specifically identified by said administrator and given a specific entry in the DHCP server settings to assign "abnormal" configurations to this host, for reasons likely beyond the scope of this question. If the IP address of the host was set manually, by hand, you should still consult the network administrator to ensure that you do not (or did not) select an IP address which is already in use by or assigned to another device which connects to the same network (or is in the DHCP pool and may thereby be assigned to another device on the network).
I have a linux (RHEL 7) server on a network and the default gateway is set incorrectly. How can I find out what the default gateway should be, without asking a Network Administrator?
How can I find out what my default gateway should be?
The tunnel between the gateways needs to be a common network. One way would be to set its network mask to 255.255.0.0, so that all 192.168.x.x addresses are on the same network. If you want to keep the 255.255.255.0 network mask, both gateways need to have 192.168.10.x (or 192.168.20.x) addresses.
I get this error while configuring a network in VirtualBox with Linux. I have 2 gateways (192.168.10.5 and 192.168.20.5) that are connected to 2 hosts. The first host has address 10.0.10.100 and is connected to its gateway via 10.0.10.1, while the other has 10.0.20.100 and is connected to the other gateway via 10.0.20.1. The gateways are connected to each other via host-only network adapters. I have configured the hosts acting as gateways with IP forwarding enabled. I made 10.0.10.1 and 10.0.20.1 the default gateways for the two hosts. Then my idea was to run route add (from 192.168.10.5) net 10.0.20.0 netmask 255.255.255.0 gw 192.168.20.5, but I got that error. I cannot understand why. Do you have any solution?
SIOCADDRT error: no such process
It turns out I had a concurrent 10.0.0.0/8 network bridged with my other 10.0.0.0/8 network. So while the drops were actually happening, router2 was picking up packets from the other network, which is a different network that the router has no way to control.
Router1 is the gateway for another router (router2). Router1 has a 10.0.0.1/8 LAN network to which router2 is attached. 10.0.0.1 is the bridge IP address of the LAN as well as the gateway IP assigned to router2. Router1 has the following rule that successfully blocks every attempt from router2 to reach a destination other than 10.0.0.1 (router1's own IP), but unfortunately only if router2 is using 10.0.0.1 as its gateway.

iptables -t raw -I PREROUTING ! -d 10.0.0.1 -j DROP

The problem is: if I change the gateway of router2 from 10.0.0.1 to 10.22.22.1 and use DHCP to obtain an IP address, or manually set 10.22.22.22/24 (for example), router2 is able to access the internet through router1!? This is strange to me because the above rule is very clear. What rule should I apply to block router2's internet access and only allow access to the gateway 10.0.0.1/8? (The /8 network is absolutely needed for the LAN; also, I need raw table commands only.)
Unable to block traffic with iptables
The comments to my question suggest disabling the first DHCP server, which is probably a valid thing to do. However, I do not seem to have any trouble so far running two servers, because the second one is non-authoritative and the ranges are separated. It seems to hand out two IPs so far to my two test clients. (I might edit my answer if I recognise issues.)

Also: switching my ISP router's local DHCPv4 server to off just seems to turn it non-authoritative, since I can still edit static IPs for MAC addresses in the GUI after switching it off. So "local" seems to stand for "the locally responsible".

Even if I could disable the first DHCP server I would still face this: my clients did not receive a route to the tunnel from my own DHCP server. This can be accomplished by configuring the DHCP server to hand out static routes. Caution: it will then not hand out the default gateway any more, unless you define it as another static route: https://ral-arturo.org/2018/09/12/dhcp-static-route.html

So the solution to my problem is not to set up a second (default) gateway, but to configure DHCP so it will hand out the routes that my clients need. My /etc/dhcp/dhcpd.conf looks like this:

#authoritative;
default-lease-time 86400;
max-lease-time 86400;
option rfc3442-classless-static-routes code 121 = array of integer 8;
option ms-classless-static-routes code 249 = array of integer 8;

subnet 192.168.111.0 netmask 255.255.255.0 {
  range 192.168.111.223 192.168.111.254;
  option routers 192.168.111.1;
  #deny unknown-clients;
  option domain-name-servers 192.168.111.1;
  option domain-name "local";
  option rfc3442-classless-static-routes 24, 192, 168, 1, 192, 168, 111, 222;
  option ms-classless-static-routes 24, 192, 168, 1, 192, 168, 111, 222;
}

host squeezeboxtest {
  hardware ethernet 00:04:20:5f:55:8e;
  fixed-address 192.168.111.231;
  option host-name "squeezeboxtest";
}

host asusklein {
  hardware ethernet 04:e6:76:5d:cf:a6;
  fixed-address 192.168.111.232;
  option host-name "asusklein";
}

host HAPZE {
  hardware ethernet fc:f1:52:fc:a6:60;
  fixed-address 192.168.111.21;
  option host-name "Sony HAP-ZE";
}

The server can be restarted with

sudo systemctl restart isc-dhcp-server

Just as a note: it is possible to restrict clients from getting an IP from the second DHCP server by adding the line deny unknown-clients;.
I need to add a static route to my internet service provider's router. Unfortunately, this router does not provide such a modification option for an end user. The reason I need the static route is so that clients in that LAN will know where to send packets for a remote LAN which is connected via WireGuard. So my solution was to set up a second DHCP server on the Raspberry Pi that is providing the WireGuard tunnel. I make the DHCP server non-authoritative and add some hard-coded MAC addresses to the configuration so it will only give out IPs to those clients. Now, if a client gets its IP from this second DHCP server, it can also get the default gateway from it; I can set this up on the DHCP server. Would it be correct to set this Raspberry Pi as the default gateway instead of the ISP's router? (This will only affect clients that get their DHCP lease from the Pi.) I could then add a route for the specific remote LAN into the WireGuard tunnel, and the default route will go to the internet service provider, which is the gateway for the internet. Will that work?
How to setup gateway on second DHCP server?
I finally got it to work by enabling Google 2-step verification and using an app-specific password for mutt. More detail: I enabled 2-step verification on my Google account, which means that when I log in to Google I have to enter a PIN from either a text message or the Google Authenticator app. Then I had to get an app-specific password for mutt; these can be generated from the security settings of your Google account. I then used that app-specific password for logging in with mutt instead of my normal password, and I don't have to enter a PIN.
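In case it helps anyone wiring this up, here is a minimal, hedged muttrc sketch for Gmail over IMAP using such an app password (the address and the 16-character password are placeholders; imap.gmail.com and smtp.gmail.com are Google's standard endpoints):

set imap_user = "yourname@gmail.com"          # placeholder address
set imap_pass = "abcdefghijklmnop"            # the app-specific password, not your normal one
set folder    = "imaps://imap.gmail.com:993"
set spoolfile = "+INBOX"
set smtp_url  = "smtps://yourname@gmail.com@smtp.gmail.com:465/"
set smtp_pass = "abcdefghijklmnop"

Storing the password in plain text is obviously a trade-off; mutt can also read it from an encrypted file, as other posts here show.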
When I try to log in to gmail with mutt, it flashes a quick Webalert with a url, something like accounts.gmail.com or something. It's too quick for me to see or copy it. Then it says Login failed. Then I get an email from Gmail saying: Google Account: sign-in attempt blockedHi Adam, We recently blocked a sign-in attempt to your Google Account [[emailprotected]]. Sign in attempt details Date & Time: Wednesday, December 10, 2014 11:55:21 PM UTC Location: Utah, USA If this wasn't you Please review your Account Activity page at https://security.google.com/settings/security/activity to see if anything looks suspicious. Whoever tried to sign in to your account knows your password; we recommend that you change it right away. If this was you You can switch to an app made by Google such as Gmail to access your account (recommended) or change your settings at https://www.google.com/settings/security/lesssecureapps so that your account is no longer protected by modern security standards. To learn more, see https://support.google.com/accounts/answer/6010255. Sincerely, The Google Accounts teamI can go to the link and enable "Access for less secure apps" and then I can log in just fine, but is there a way to login with mutt without having to turn on this less secure option in Gmail? Update: I'm on mac os x Yosemite When I run mutt -v, in the compile options, it does contain +USE_SSL_OPENSSL I'm not using google 2-step verification I'm not using an application specific password Here are the messages that I get when I try to log in: Reading imaps://imap.gmail.com:993/INBOX... Looking up imap.gmail.com... Connecting to imap.gmail.com... TLSv1.2 connection using TLSv1/SSLv3 (ECDHE-RSA-AES128-GCM-SHA256) Logging in... [WEBALERT https://accounts.google.com/ContinueSignIn?sarp=1&scc=1&plt=AKgnsbsm0P......I found this answer, but it didn't work: https://stackoverflow.com/a/25209735/1665818
Gmail blocking mutt
A 7-year-old question, which I've searched for now, and there are a few answers, most of which are spot on. But I feel at least one is missing, and there is probably room for more.
Timeline of answers:
- Back in 2014 Mehmet mentioned imapsync. This is probably still the most focused solution maintained out there, as this is an active stream of revenue for the author Gilles Lamiral. The source is available; currently the latest code is on GitHub. Although not available as a distro package (like some of the other options), it does have an official docker-hub hosted image at gilleslamiral/imapsync. For more info see: https://imapsync.lamiral.info/INSTALL.d/Dockerfile It seems someone also created a docker image for the WebUI. (A minimal invocation sketch follows after this list.)
- Back in 2017 Quarind mentioned imap-backup. This is a Ruby based solution, and it looks like it's still being maintained.
- Back in 2021 Patrick Decat mentioned OfflineIMAP. offlineimap is Python2 based and not really maintained; offlineimap3 is a Python3 based fork that is actively maintained, and is available in most distros.
My research led me to these additional options:
- isync (the package name for the mbsync command): Home page | Arch Wiki Page | Distro/Package availability
The packages below are available on Debian 11 (bullseye), but I don't know much about them yet:
- imapcopy: unmaintained since ~2009
- interimap: still actively maintained at the developer's website
- mailsync: on SourceForge
- mswatch: repo. Requires something else to do the actual syncing.
- vdirsyncer: site. A companion to other IMAP synchers, for syncing calendars and contacts.
Update 2022-05, specifically for Gmail / Google Workspace mailboxes (not IMAP solutions, but might be related to somebody's search, so I feel they are worth mentioning):
- Got Your Back
- Gmvault: GitHub
As I learn more I'll update this, as I'm actively looking for a solution myself.
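Since imapsync is the option I'm leaning towards, here is a hedged sketch of a basic one-shot copy between two accounts (host names, users and passwords are placeholders; --ssl1/--ssl2 enable TLS on each side; check imapsync --help for your version before relying on it):

imapsync \
  --host1 imap.old-provider.example --user1 me@old-provider.example --password1 'secret1' --ssl1 \
  --host2 imap.new-provider.example --user2 me@new-provider.example --password2 'secret2' --ssl2

The same invocation should work through the official image, e.g. docker run gilleslamiral/imapsync imapsync --host1 ... --host2 ...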
Which Linux tools help to back up and restore an IMAP mail account including all mail and subfolders? I expect disconnects for large IMAP accounts because of resource limitations on the server, and the risk of an interruption increases with the duration. The software should be able to reconnect and continue the job after any interruption. For repeated backups it might be very handy to use incremental backups and to run the backup script in a cron job.
Backup and restore IMAP mail account with (open source) Linux tools
WARNING: The request does not follow best security practice because you disable TLS (encryption) on your main mail relay port, exposing data sent through that port to third-party listeners and/or in-flight modification. The answer below satisfies the request, but best practice requires STARTTLS for the port 25 connection as well. The master.cf file (usually /etc/postfix/master.cf) controls the startup and configuration of specific Postfix services. A configuration like this in that file, according to the documentation, will do what you want:

smtp  inet  n  -  -  -  -  smtpd
  -o smtpd_tls_security_level=none
  -o smtpd_sasl_auth_enable=no
smtps inet  n  -  -  -  -  smtpd
  -o smtpd_tls_security_level=encrypt
  -o smtpd_sasl_auth_enable=yes
  -o smtpd_client_restrictions=permit_sasl_authenticated,reject

This configuration turns off authentication and the STARTTLS option on port 25. It turns on the STARTTLS option on port 465, requires STARTTLS usage, enables authentication, and only allows clients to connect if authenticated. You might also look into the smtpd_tls_wrappermode option to force true TLS connections (and not STARTTLS connections); see the sketch below. Note that this kind of configuration can make the Postfix configuration somewhat difficult to follow (options may be set in main.cf and then overridden in master.cf). The other option is to run multiple instances of Postfix, each with their own main.cf configuration files that specify these options.
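As a hedged illustration of that smtpd_tls_wrappermode remark (this is the standard Postfix option for implicit-TLS "smtps", but verify against the documentation for your Postfix version), the port 465 entry could instead look roughly like this:

smtps inet  n  -  -  -  -  smtpd
  -o smtpd_tls_wrappermode=yes
  -o smtpd_sasl_auth_enable=yes
  -o smtpd_client_restrictions=permit_sasl_authenticated,reject

With wrapper mode the whole session is TLS from the first byte, which is what most mail clients actually expect when they are pointed at port 465.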
When using Postfix and IMAP on a mailserver, at least 3 ports are usually opened:
25 smtp: incoming emails from anybody (whole internet)
465 smtps: outgoing emails from authorized users (to the whole internet)
993 imap: IMAP for authorized users
I would like to configure Postfix so that authorized users can only send email through 465. By default this is not so: users can also use STARTTLS over port 25, and I would like to disable that. My plan is to use port 25 for the public sending me email, and port 465 for my users (I can use a firewall to allow specific IP ranges, or use a custom port). This would prevent port 25 from being exploitable in brute force attacks where hackers try to guess a user/password: port 25 simply would not accept a user/password, even if it were valid. And since port 465 is restricted by the firewall, hackers cannot exploit 465 either. Is this possible in Postfix? I am using Postfix 2.9.6-2 on Debian Wheezy.
Postfix: disable authentication through port 25
Setting the username and password directly works, but it doesn't when you set them via an account-hook, so the account-hook is probably never firing. An account-hook consists of a regexp for the mailboxes and the commands that should be executed if a mailbox matches the regexp. Since the commands (set imap_user, set imap_pass) are not executed, we can assume that the regexp didn't match your mailboxes. You are using 'imaps://mail.domain.net:993/INBOX/', which is very specific; probably your mailboxes are named slightly differently. Is this the only mail account from mail.domain.net you are using? If so, reducing the regexp to 'mail.domain.net' should be enough to match your mailboxes:

account-hook . 'unset imap_user; unset imap_pass; unset tunnel'
account-hook mail.domain.net "set [emailprotected]"
account-hook mail.domain.net "set imap_pass=${my_password}"
Unsetting mutt's configuration variables imap_user, imap_pass (and perhaps preconnect, imap_authenticators as well) via an account-hook . "unset ... " call, seems to be common practice, if not a necessity, for handling multiple imap accounts (see Managing multiple IMAP/POP accounts (OPTIONAL), Mutt imap multiple account, mutt: gmail IMAP unresponsive, an account-hook related configuration file in funtoo.org). Currently I handle only one account via IMAP. Plans for multiple account handling lead me to follow the instructions found in the last of the above mentioned links (someone's example of mutt configuration). Therefore, in a similar way, I used the following: account-hook . 'unset imap_user; unset imap_pass; unset tunnel' account-hook 'imaps://mail.domain.net:993/INBOX/' "set [emailprotected]" account-hook 'imaps://mail.domain.net:993/INBOX/' "set imap_pass=${my_password}"This is stored in a separate file (named account_hooks) and sourced from inside muttrc. For reasons I don't understand, mutt keeps asking for the username and the password. However, if the variables imap_user and imap_pass are set directly in muttrc, e.g. set my_password="`gpg --decrypt ~/.mutt/password.gpg`" set imap_authenticators='login' set imap_login = '[emailprotected]' set imap_user = '[emailprotected]' set imap_pass ="${my_password}"everything works fine. The account_hooks file is the first one sourced and no other account-hook . "unset ..." call(s) exist(s) anywhere else. Update, The folder-hooks file is (and was, I think) as follows: #-------------------------------------------------------------------------- # Folders and hooks #-------------------------------------------------------------------------- # folder-hook 'imaps://UserName%[emailprotected]:993/' set folder = "~/.maildir" # IMAP: local, using offlineimap -- folder="imaps://mail.domain.net:993/INBOX/" source ~/.mutt/mailboxes # source automatically generated mailboxes set spoolfile = "+INBOX" # spoolfile='imaps://mail.domain.net:993/' set postponed = "+INBOX/Drafts"# Sending ----------------------------------------------------------------- set smtp_url="smtp://[emailprotected]@mail.domain.net:587/" set smtp_pass=${my_password} set record = "+INBOX/Sent" set copy=yes# Index format ---------------------------------------------------------------- folder-hook *[sS]ent* 'set sort=threads' folder-hook *[sS]ent* 'set sort_browser=reverse-date' folder-hook *[sS]ent* 'set sort_aux=reverse-last-date-received' folder-hook *[sS]ent* 'set index_format="%2C | %Z [%d] %-30.30t (%-4.4c) %s"' folder-hook ! *[sS]ent* 'set index_format="%2C | %Z [%d] %-30.30F (%-4.4c) %s"':Why does, the separate file account_hooks, not feed properly the variables of interest in this case (i.e. imap_user and imap_pass)?
Why does mutt keep asking for imap username and password?
I am now using Trysterobiff. It is a non-polling IMAP mail notifier for the systray. It implements the requirements, including the execution of external commands, and does not crash. I've written it using Qt, so Trysterobiff is quite portable. The non-polling operation is implemented using the IDLE extension of IMAP, i.e. you are immediately notified of new mail (in contrast to a polling approach).
I am searching for a small new-email notifier for IMAP mailboxes that displays its status in the icon bar (what do you call it?) of a window manager. Basically some biff/xbiff-like tool ported to 21st century technology. ;) I am using awesomewm, which is able to display in its taskbar the 'applets' (?), which also work under GNOME (I guess that it implements some freedesktop standard). Basic requirements:
- should not waste memory/CPU (e.g. a Python GTK based solution probably would)
- support for IMAPS, and should check the host TLS certificate
- configurable poll intervals
- should not distract too much
- nice interface
Nice to have:
- optional configuration of a user-defined action (executing an external command)
IMAP mail notifier for window manager/task bar?
Dovecot supports the IMAP SEARCH function, plus it's a pretty simple IMAP service to run. It can read a variety of mailbox formats, so as long as you use fetchmail to deliver into the appropriate format (or to procmail), it should work fine. As for webmail interfaces, there are so many, I wouldn't know where to start. I like RoundCube, but it's more for the traditional IMAP mail format with lots of folders, so it might not fit your needs.
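Since the question also wants messages pulled off the commercial account and deleted there, a hedged ~/.fetchmailrc sketch for that leg might look like the following (host, user and credentials are placeholders; "no keep" is what deletes the mail from the provider after download, so test with "keep" first):

poll imap.provider.example protocol imap
  user "me@provider.example" password "secret" is "localuser"
  ssl
  no keep
  mda "/usr/bin/procmail -d %T"

Procmail (or whatever MDA you pick) then writes into the maildir or mbox that Dovecot serves back out over IMAP.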
I'm quitting GMail but attempting to avoid the headaches that come with administering my own Internet-facing IMAP server. I have access to a commercial IMAP account, and I'd like to continue to use that account. Basically the way I'd like it to work is that mail is downloaded to my server and deleted from my mailbox on the commercial IMAP server. Then once on my server it's served up via IMAP and webmail, so that it can be used on mobile + internet devices, and remains in sync across both. Things it must do:Be accessible from all devices Relatively fast searching Threaded viewing of message replies (preferably with my sent mails interspersed)Here's how I see it working: Mail arrives at commercial IMAP server -> On my server, fetchmail downloads via IMAP and delivers to -> MDA/MTA, which updates a search cache for rapid searching and makes it available via IMAP to -> Horde IMP, which caches the e-mails in the inbox for faster previewingThe Question I'm looking for an MDA/MTA pair that can be delivered to by fetchmail, emphasizes security, and supports search cacheing on this scale (so when I search the inbox in IMP it doesn't take forever). My plan is to be GMail-like in that I'll likely not put things in folders but leave everything in the Inbox and search it when I need to find anything. Any other thoughts on the sanity/insanity of this welcome, but my main concern is the MDA/MTA.
Roll-your-own GMail alternative
You can rename an IMAP folder in mutt from the folder browser: press 'c' (change folder) and then '?' to get the list of folders. When the cursor is on the folder that is to be renamed, press the 'r' key and you will be asked for the new name of the folder.
How does one interactively rename an IMAP folder within the mutt MUA? Currently, if I want to change a folder name, I use the gmail web browser interface, but this is rather inefficient. Is there an equivalent of the unix mv command in mutt? When I search for this topic on google, the search results pertain to renaming local mutt folders and files like .muttrc.
mutt: rename IMAP folder
It's similar to how the data in a rm'd file will still be there until the disk space is reused. Thunderbird marks the space as free, but doesn't actually free it (which could involve moving later messages in the file, etc.). The way to make it actually free the space is to compact the folder. Right-click the trash and select 'Compact'. That should do it. You can also select 'Compact Folders' under the File menu to do all folders.
I am using Thunderbird + IMAP. Thunderbird caches messages locally in: .icedove/asdfgh.default/ImapMail/mail.example.com/ For each IMAP folder, there are files Folder and Folder.msf. I have noticed that when I delete an email with a large attachment, and then delete it from Trash as well, the trash file .icedove/asdfgh.default/ImapMail/mail.example.com/Trash still contains the email (and the attachment). Thus, even though from within Thunderbird it looks as if the message has been deleted (or expunged from trash, or whatever the term is), the message data is still in the trash file (I can see it when I open the Trash file with my text editor). Can anybody please explain what is happening here? How can I really delete an email? And I should add that the email has been successfully deleted on the IMAP server. So Thunderbird has deleted the email on the server, but for some reason still keeps the data in the file.
Thunderbird: deleted emails are still in local IMAP folder
You could call a small script from your tmux status bar that updates with any new mail:

#!/bin/bash
# Set maildirs
maildirs="$HOME/Mail/*/INBOX/new/"
find $maildirs -type f | wc -l

And in your .tmux.conf:

set -g status-right "#[fg=yellow,bright]Mail: #(tmuxmail) ..."

This count will be updated according to the status-interval value, e.g.:

set -g status-interval 1
I am on tmux, with mutt in an inactive window. If IMAP flags change on a message through external means, I see the visual bell and the status bar changes, drawing my attention to the e-mail client. That works well. What I would like to do but still can't get to work is:Get a proper notification about incoming e-mail whilst in another tmux window Never get notifications later than 5 minutes from their arrival, possibly fine-tuning this intervalI use Gmail over IMAPS.
Mutt new e-mail notifications in tmux window
There are several options depending on what you are wanting to achieve and what you are wanting to do to get there.Get the IMAP server to do the filtering for you. This is sometimes an option in web-mail based solutions and allows you to filter the messages based on e.g. the addresses listed in the To: or Cc: header of each mail. I'm not familiar with Gmail's offerings in this regard. Manually mark the messages in mutt and copy them to a new folder on the IMAP server, or to a local mailbox. Mark the messages you want to move with T followed by the search pattern ~C [emailprotected] (this tags all messages that were either sent directly to or Cc-ed to the address [emailprotected]). Then press ; followed by s to apply the "save" (move) command to all tagged messages. Then enter the IMAP folder path you want to save the messages to. The IMAP folder path should be specified as imap[s]:[user[:pw]@]imapserver.example.com[:port]/pathJust to say that the IMAP server that I have access to doesn't like this. There are no errors, but the messages are clearly not copied. Test it on a less important message first! You may obviously save the messages locally instead! You may also define a macro in mutt to do this. Download the messages from the IMAP server and filter and read them locally.I tend to download the messages off the IMAP server using fetchmail. This gives me the opportunity to do my own spam filtering and mail sorting on my local machine. For both these tasks I use procmail1 which is a fairly advanced mail processing program. The essential configuration for fetchmail that I use is poll myimapserver.example.com protocol imap user "myimapusername" password "myimappassword" is "mylocalusername" mda "/usr/local/bin/procmail -m $HOME/.procmailrc" ssl sslcertfile /etc/ssl/cert.pem sslcertck idleThis will fetch any new messages off the IMAP server as they arrive, and deliver them to procmail for processing. Paths etc. will be different on your system. Then I filter with procmail using a configuration ($HOME/.procmailrc) like MAILDIR="$HOME/Mail" DEFAULT="inbox/":0 * ^[emailprotected] openbsd-announce/:0 * ^[emailprotected] openbsd-misc/... for two of the mailing lists I'm on (they will be stored in subdirectories under $HOME/Mail). Mail not matching any patterns will be stored in $HOME/Mail/inbox as specified by MAILDIR and DEFAULT. I'm using Maildir mailboxes. Remove the trailing slashes on the paths to get mbox mailboxes.1 Note that procmail is retired. I was not aware of this as I've been using is since the 90's without much consideration for any of the up-and-coming alternatives. It seems, after some gentle browsing on the interwebs, that maildrop is considered a good alternative to procmail, and I might look into moving my filtering over to maildrop myself.
I have a Gmail account, mutt is configured to get the mail through IMAP. Yesterday I subscribed to a mailing list and now my personal emails are mixed up with the ones from the list. The list emails are addressed to me and [emailprotected]. How can I tell mutt to move all such emails to a separate file, so they wouldn't be mixed with my emails. But I still could read them, opening that file?
How to categorize incoming emails
Use getmail. It's a nice Python program which can be used to download mail from servers. The website is a bit dated, but the software is recent and well maintained. Here is an example config file:

[options]
delete = False

[retriever]
type = SimpleIMAPSSLRetriever
server = my-servername
username = my-username
password = my-password

[destination]
type = Maildir
path = ~/Maildir/

As you can see, one can define where the mail is to be saved. Multiple mailbox formats are supported. You could also hand mail over to a local IMAP server, e.g. Dovecot. If you don't want to use SSL, use SimpleIMAPRetriever instead of SimpleIMAPSSLRetriever.
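Since the question asks for a daily, unattended run, a hedged crontab sketch could look like this (it assumes getmail is on the PATH and reads its default rc file under ~/.getmail/; --quiet just suppresses normal output, so drop it if your getmail version doesn't support it):

# m h dom mon dow  command
0 3 * * * getmail --quiet

getmail exits normally when there is nothing new, so it is safe to run repeatedly, and the delete = False option above keeps the server copy intact, which is what you want for a backup.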
I have a headless Debian/Raspbian Linux machine and I would like to back up all my emails via IMAP, including all mail and subfolders, once daily (the connection is secured with SSL/TLS; it should run automatically from a cron job every day). This backup should store the same emails as I have on my default mailserver - so when I am working from another computer the whole day, it should be able to sync my work (that's why I want to use IMAP). Ideally I would like to have all my emails in a readable format on the backup machine, in case the main mailserver fails. Any idea how this can be done?
backup emails from IMAP in readable form
By default Dovecot uses the Maildir++ directory layout for organizing mailbox directories. This means that all the folders are directly inside the ~/Maildir directory, and the ~/Maildir/new, ~/Maildir/cur and ~/Maildir/tmp directories contain the messages for INBOX. (You can read more about the layout in the Dovecot documentation.) Thus what you complain about is standard behavior. You can change the layout nevertheless, by using the LAYOUT and INBOX options. To have cur, new, tmp inside the inbox as you require: $HOME/Mail/inbox/{cur,new,tmp} you could specify the following option in /etc/dovecot/conf.d/10-mail.conf:

mail_location = maildir:~/Mail:INBOX=~/Mail/inbox:LAYOUT=fs
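After changing 10-mail.conf it is worth restarting Dovecot and confirming that the setting actually took effect. A hedged way to check (doveconf prints the running value of a setting, doveadm lists the mailboxes Dovecot sees for a user; substitute a real username):

systemctl restart dovecot
doveconf mail_location
doveadm mailbox list -u someuser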
I have set up Dovecot on my Postfix mailserver. My mailserver is using Maildir format: home_mailbox = Mail/inbox/A user's Mail directory looks like this: $HOME/Mail/inbox $HOME/Mail/drafts $HOME/Mail/sent $HOME/Mail/trashI have set up mailboxes in Dovecot accordingly mail_location = maildir:~/Mailnamespace inbox { mailbox drafts { special_use = \Drafts } mailbox sent { special_use = \Sent } mailbox trash { special_use = \Trash } }Now, the problem is, Dovecot does not use the mailboxes as defined, but creates its own mailboxes named with a . in front and with first letter capital: $HOME/Mail/.Drafts $HOME/Mail/.Sent $HOME/Mail/.TrashFurther, instead of using $HOME/Mail/inbox as inbox, it uses $HOME/Mail as inbox. i.e. it created the cur/new/tmp directories directly in $HOME/Mail/, rather than using the existing $HOME/Mail/inbox: $HOME/Mail/cur $HOME/Mail/new $HOME/Mail/tmpSUMMARY: explained briefly, what I need is the following: I have an existing Maildir folder structure where Postfix delivers mail, plus the usual folders (drafts, sent, ...): $HOME/Mail/inbox/{cur,new,tmp} $HOME/Mail/drafts/{cur,new,tmp} $HOME/Mail/sent/{cur,new,tmp} $HOME/Mail/trash/{cur,new,tmp}How can I tell Dovecot to use the correct directories?
Dovecot ignores settings for mailboxes
Do as recommended by Andreatsh in the comments. Go to http://myaccount.google.com, then "Sign-in & security" -> "Signing in to Google" -> "App password". Once you have created the app password you will also have to run:

touch ~/.pine-passfile

This makes it so that when you enter the Gmail folder in Alpine you will be asked if you want to save the password.
I always get a message: IMAP Authentication canceled And then: Retrying plain authentication after [ALERT] application-specificWhen I look at my google security settings I can't find any option to create an application specific password to associate with Alpine on my laptop. https://productforums.google.com/forum/#!topic/gmail/bSQZVxRIjb0
pine (Alpine) with GMail 2-step Authentication enabled?
Try isync which sounds like it should fit your purpose: isync is a command line application which synchronizes mailboxes; currently Maildir and IMAP4 mailboxes are supported. New messages, message deletions and flag changes can be propagated both ways. isync is suitable for use in IMAP-disconnected mode.
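To make that concrete, here is a hedged, minimal ~/.mbsyncrc sketch for pulling one account into a local maildir (host, user and paths are placeholders; the Far/Near keywords need isync 1.4 or newer, older releases call them Master/Slave):

IMAPAccount archive
Host imap.example.com
User me@example.com
PassCmd "gpg -q -d ~/.mail-pass.gpg"
SSLType IMAPS

IMAPStore archive-remote
Account archive

MaildirStore archive-local
Path ~/Maildir/archive/
Inbox ~/Maildir/archive/INBOX

Channel archive
Far :archive-remote:
Near :archive-local:
Patterns *
Create Near
SyncState *

Because mbsync keeps per-mailbox sync state, you can simply re-run it after a timeout or dropped connection and it continues where it left off.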
I am searching for a solution to download (for archiving) a large mail directory via IMAP from a mail server and store all mails in a local maildir. I need a robust solution which retries if time-outs occur. So far I have tried using regular mail clients for this; apparently the server restricts downloads in a way which confuses the clients I have tried, in that they eventually give up and even lose mails. I am thinking of a tool like fetchmail, but the howtos I have seen are either a bit long or don't fit my needs exactly, such as http://llg.cubic.org/docs/imapbackup.html which describes downloading to mbox files. It remains unclear to me whether the maildir format is supported.
Download big imap directory to maildir
No. When you read email via IMAP, the mail stays on the server. The client just downloads individual messages as needed to display them. When you mark it read or move it to a folder, the client just sends a message to the server asking it to do that. When fetchmail downloads a local copy, what happens to that copy is not reflected on the server side. If you want the things you do to your mail to be reflected on the server, then you don't want to use fetchmail. You want an IMAP-enabled mail client, of which there are many for Linux. It looks like the only Linux client officially supported by Google is Thunderbird, but other clients are likely to work also.
I'm studying and learning about how mail is handled in linux systems, and one thing has become a source of confusion for me. On my iPhone, via IMAP, I can mark a message in my Gmail inbox as read, or I can move it to another folder ("label" in Gmail speak). Then when I later view my Gmail account via web interface, these changes have percolated to the Gmail server. However, given my linux client, I have read that all fetchmail does is fetch mail (pun unintended), rather than deliver it. The delivery part would be the responsibility of procmail or postfix. But if fetchmail just hands off the delivery part to procmail or postfix, it doesn't seem like it would have any way of knowing whether that email was later marked as read or saved to a specific IMAP folder. In fact, it seems like the idea of an "IMAP folder" wouldn't even seem to exist any longer at that point! Does fetchmail actually do some creation or marking of "IMAP folders"? So is it possible to use fetchmail to get local copies of mail from the server, yet still keep the IMAP features of marking messages as read and moving them to specific folders? If so, how?
Does fetchmail support these IMAP features? If so, how?
Gmail does not allow IMAP access by default from clients that don't meet its nebulously-defined security standards - I ran into the same thing testing some scripts I was writing with Python's imaplib. You need to go to the website and enable connections from less secure apps. Information page from Google (includes direct link to settings screen): https://support.google.com/accounts/answer/6010255
I'm in the process of configuring Gnus to retrieve emails from my gmail account via IMAP. I have done as recommended at https://www.emacswiki.org/emacs/GnusGmail, however I keep on getting errors from Gnus upon startup: Opening connection to imap.gmail.com via tls... nnimap (gmail) open error: 'NO (ALERT) Please log in via your web browser: https://support.google.com/mail/accounts/answer/78754 (Failure)'. Continue? (y or n) y Saving file /home/mark/.newsrc-dribble... Wrote /home/mark/.newsrc-dribble [2 times] Gnus auto-save file exists. Do you want to read it? (y or n) y Opening nnimap server on gmail... Server nnimap+gmail previously determined to be down; not retrying Opening nnimap server on gmail...failed: NO (ALERT) Please log in via your web browser: https://support.google.com/mail/accounts/answer/78754 (Failure) Checking new news... Reading active file from gmail via nnimap... Opening nnimap server on gmail... Server nnimap+gmail previously determined to be down; not retrying Opening nnimap server on gmail...failed: NO (ALERT) Please log in via your web browser: https://support.google.com/mail/accounts/answer/78754 (Failure) Reading active file via nndraft...done Checking new news...done No news is good news Warning: Opening nnimap server on gmail...failed: NO (ALERT) Please log in via your web browser: https://support.google.com/mail/accounts/answer/78754 (Failure); Server nnimap+gmail previously determined to be down; not retry\ ing; Opening nnimap server on gmail...failed: NO (ALERT) Please log in via your web browser: https://support.google.com/mail/accounts/answer/78754 (Failure); Server nnimap+gmail previously determined to be down; not retrying gnus-group-read-group: No group on current lineI have GnuTLS installed and gnutls-cli seems to work with imap.gmail.com:993, I'm getting "OK Gimap ready for requests". Here's my ~/.gnus: (setq gnus-select-method '(nnimap "gmail" (nnimap-address "imap.gmail.com") (nnimap-server-port 993) (nnimap-stream ssl) (nnir-search-engine imap) (nnimap-authinfo-file "~/.authinfo")))(setq smtpmail-smtp-service 587 gnus-ignored-newsgroups "^to\\.\\|^[0-9. ]+\\( \\|$\\)\\|^[\"]\"[#'()]")My ~/.authinfo looks like this: machine imap.gmail.com login [emailprotected] password my_password port 993 machine smtp.gmail.com login [emailprotected] password my_password port 587What could be the problem?
GNU Emacs Gnus can't connect to gmail IMAP
After some back-and-forth through comments and chat the OP's problem is now resolved. The IMAP server needed to be specified as imap.mydomain.com instead of mydomain.com, although for some reason this only worked when set using the advanced account settings, not using the account creation dialog box. In addition to mentioning this situation-specific solution, I think that the most useful thing I can say in an answer to this question is to list some generic troubleshooting tips on the topic in the hopes that they will be useful to someone else reading this later.
- Thunderbird's feature to autodetect account settings is fantastic and a great improvement over the dark days of email account setup when ISPs had to provide long-winded instructions including everything from the server type (POP or IMAP) through the port numbers to the authentication protocol. In an ideal world users would only need to specify their email address, password, and maybe I guess the server name (that's all they need for accessing gmail through the web, after all...). Yet when the autodetection feature doesn't work, you get almost nothing in the way of useful error messages. "Thunderbird failed to find the settings for your email account" means basically nothing. TIP: when Thunderbird's autodetection feature isn't working, don't waste time on it, and fall back to specifying everything manually until it works. Then, once you have it working, you can concentrate on finding out why autodetection failed and maybe fixing it so it will work for the next user.
- Always use port 143 for IMAP if you can. There is also port 993 for IMAP over SSL, but all reasonably modern clients and servers support STARTTLS for upgrading unencrypted connections to encrypted ones, so there really isn't any need anymore to worry about different ports for IMAP. Connections on port 143 will automatically be encrypted if possible.
- (Not related to IMAP but anyway) Always use port 587 for SMTP if you can. SMTP used to always be done on port 25, but ISPs frequently block port 25 because of spam. Port 587 was designated specifically for SMTP communication between MUAs and mail servers, is expected to support SMTP AUTH and STARTTLS as necessary, and has already been in existence for many years. There is rarely any need to worry about configuring MUAs to use any other port.
- Dovecot treats unencrypted connections and encrypted connections differently, and this may also apply to local connections (to localhost, 127.0.0.1 or ::1) versus remote connections. The most common types of authentication are insecure over unencrypted remote connections, so Dovecot does not offer them. Therefore, when testing and debugging through the command line, in order to simulate a real MUA most closely, test remotely and use STARTTLS to encrypt connections. Otherwise you may find that authentication works fine with telnet and still wonder why it doesn't work in the MUA.
- Test using telnet (for unencrypted connections) or openssl s_client (for encrypted connections).
- Use the same hostname that you are trying to get the MUA to accept. If you want imap.mydomain.com or mail.mydomain.com or just mydomain.com to work when specified as the mail server in the MUA, test using the same host name from the command line. And if you get a hostname resolution error, you know that the problem lies in DNS.
openssl s_client -starttls imap -port 143 -CApath /etc/ssl/certs -host <hostname>

If the SSL certificate configured on the Dovecot server has a problem, Thunderbird will warn about that but it will still allow you to connect. If you are completely unable to connect, the certificate is probably not the problem. Of course, once you are ready to go into production, you will want to use a certificate signed by a recognized certificate authority and have the name on the certificate match the IMAP server name that gets configured in MUAs.

Useful IMAP commands for testing; type these into IMAP sessions you open with telnet (unencrypted) or openssl s_client (encrypted):

tag1 LOGOUT
tag2 LOGIN <username> <password>
tag3 CAPABILITY
tag4 LIST "" "*"
A remote CentOS 7 web server is able to successfully receive email sent from elsewhere on the internet addressed to [emailprotected] . An app running on the same CentOS 7 server is able to use JavaMail to make an IMAP connection to the dovecot Maildir where the incoming messages get stored. So what do I have to add in order for Thunderbird running on my devbox to be able to make an IMAP connection to the remote CentOS 7 server across the internet? So far, I added imaps to the public zone of firewalld. I also confirmed that dovecot.conf contains the line protocols = imap pop3. I configure Thunderbird to use IMAP for incoming mail, with mydomain.com as the hostname, with port 993 and SSL with normal password. And I confirmed at my domain registrar's web site that the dns mx entry uses mydomain.com as the mx address. EDIT To answer @Celada's question, I have posted the dialog that Thunderbird gives indicating that it has failed to connect to the server when it tries to confirm my login information. I get the same information when I specify port 993 for imap and port 25 for smtp, and when I indicate SSL connection. Also, changing .mydomain.com to mydomain.com does not eliminate the login failure. I will try to access the firewalld logs next and will post results. My understanding is that firewalld does not log automatically, so I will have to develop some rich rules. It might take some time to identify the proper syntax. I think it is a server config problem. I hesitated to show the Thunderbird dialog because I did not want to give the impression that it is a client issue. I think the server config needs to be determined/set-up before I can set up Thunderbird. EDIT#2 As per @Celada's suggestion, I typed telnet localhost 143 and got the following response: Trying 127.0.0.1... Connected to localhost. Escape character is '^]'. * OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE STARTTLS LOGINDISABLED] Dovecot ready. I also typed telnet localhost 25 and then got the following in response: Trying 127.0.0.1... Connected to localhost. Escape character is '^]'. 220 mydomain.com ESMTP PostfixThese telnet results pointed out that firewalld was mapping imaps and smtp to the wrong ports, so I typed nano /usr/lib/firewalld/services/imaps.xml and changed the port from 993 to 143. And then I typed nano /usr/lib/firewalld/services/smtp.xml and changed the port to 25. I then typed firewall-cmd --reload to ensure that the changes were put into effect. Next, I put the new information into Thunderbird and tried a test connection again, but again got a failure message shown by the following dialog box: Note that I checked the MX record in the DNS at my domain registrar, and it is exactly mydomain.com, as shown in the screen shots. I don't see how this is irrelevant. I did check and the hostname on the server is also mydomain.com. Is there some other resource I should be checking to confirm the correct mail server name instead? Also note that dovecot and postfix were installed with a standard configuration. I did not explicitly configure ssl to work with them, though SSL may have been part of the default configuration. I did, however, change the settings in the dialog box above and tested a connection with None specified in the SSL field, but got the same failure message. 
The dovecot log in /var/log/maillog after the most recent (bottom) screen shot above is: Feb 27 00:52:57 mydomain dovecot: imap-login: Aborted login (no auth attempts in 0 secs): user=<>, rip=my.DEVBOX.ip.addr, lip=my.SERVER.ip.addr, session=<YsH2egsQAABi9AyF>EDIT#3 Following @Bandrami's advice, I changed protocols = imap pop3 in dovecot.conf to protocols = imaps pops. I then made sure that /usr/lib/firewalld/services/imaps.xml specifies port 993. I typed firewall-cmd --reload and systemctl stop dovecot then systemctl start dovecot to restart the relevant processes on the server. I then configured the Thunderbird test to specify port 993 and SSL/TLS and re-ran the connection test in Thunderbird, only to get the same result in Thunderbird. The dovecot logs, however, are a little more explicit, and are as follows: Feb 27 01:18:20 mydomain dovecot: config: Warning: NOTE: You can get a new clean config file with: doveconf -n > dovecot-new.conf Feb 27 01:18:20 mydomain dovecot: config: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:26: 'imaps' protocol can no longer be specified (use protocols=imap). to disable n$ Feb 27 01:18:38 mydomain dovecot: imap-login: Disconnected (no auth attempts in 18 secs): user=<>, rip=my.SERVER.ip.addr, lip=127.0.0.1, TLS handshaking: SSL_accept() failed: error:14$ Feb 27 01:19:15 mydomain dovecot: master: Warning: Killed with signal 15 (by pid=1 uid=0 code=kill) Feb 27 01:19:15 mydomain dovecot: anvil: Warning: Killed with signal 15 (by pid=1 uid=0 code=kill) Feb 27 01:19:15 mydomain dovecot: ssl-params: Warning: Killed with signal 15 (by pid=1 uid=0 code=kill) Feb 27 01:19:15 mydomain dovecot: config: Warning: Killed with signal 15 (by pid=1 uid=0 code=kill) Feb 27 01:19:15 mydomain dovecot: auth: Error: read(anvil-auth-penalty) failed: EOF Feb 27 01:19:15 mydomain dovecot: auth: Error: net_connect_unix(anvil-auth-penalty) failed: Permission denied Feb 27 01:19:15 mydomain dovecot: auth: Warning: Killed with signal 15 (by pid=1 uid=0 code=kill) Feb 27 01:19:15 mydomain dovecot: log: Warning: Killed with signal 15 (by pid=1 uid=0 code=kill) Feb 27 01:19:22 mydomain dovecot: master: Dovecot v2.2.10 starting up for pop3, imap (core dumps disabled) Feb 27 01:19:44 mydomain dovecot: imap-login: Disconnected (no auth attempts in 15 secs): user=<>, rip=my.SERVER.ip.addr, lip=127.0.0.1, TLS handshaking: SSL_accept() failed: error:14$ Feb 27 01:23:55 mydomain postfix/qmgr[30121]: 2C915811BD1C: from=<[emailprotected]>, size=5316, nrcpt=1 (queue active) Feb 27 01:23:58 mydomain postfix/smtp[27144]: 2C915811BD1C: to=<address@domain_that_sends_to_this_addresson_server.com>, relay=none, delay=290245, delays=290241/0.02/3.6/0, dsn=4.4.3, status=deferred (Host or domain$ Feb 27 01:24:41 mydomain dovecot: config: Warning: NOTE: You can get a new clean config file with: doveconf -n > dovecot-new.conf Feb 27 01:24:41 mydomain dovecot: config: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:26: 'imaps' protocol can no longer be specified (use protocols=imap). to disable n$ Feb 27 01:24:41 mydomain dovecot: config: Warning: NOTE: You can get a new clean config file with: doveconf -n > dovecot-new.conf Feb 27 01:24:41 mydomain dovecot: config: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:26: 'imaps' protocol can no longer be specified (use protocols=imap). 
to disable n$ Feb 27 01:24:53 mydomain dovecot: imap-login: Disconnected (no auth attempts in 12 secs): user=<>, rip=my.SERVER.ip.addr, lip=127.0.0.1, TLS handshaking: SSL_accept() failed: error:14$ Feb 27 01:25:05 mydomain dovecot: imap-login: Aborted login (no auth attempts in 1 secs): user=<>, rip=my.DEVBOX.ip.addr, lip=my.SERVER.ip.addr, TLS, session=<Kdrl7QsQxwBi9AyF> Feb 27 01:27:16 mydomain dovecot: master: Warning: Killed with signal 15 (by pid=1 uid=0 code=kill) Feb 27 01:27:16 mydomain dovecot: anvil: Warning: Killed with signal 15 (by pid=1 uid=0 code=kill) Feb 27 01:27:16 mydomain dovecot: log: Warning: Killed with signal 15 (by pid=1 uid=0 code=kill) Feb 27 01:27:24 mydomain dovecot: master: Dovecot v2.2.10 starting up for pop3, imap (core dumps disabled) Feb 27 01:27:24 mydomain dovecot: config: Warning: NOTE: You can get a new clean config file with: doveconf -n > dovecot-new.conf Feb 27 01:27:24 mydomain dovecot: config: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:26: 'imaps' protocol can no longer be specified (use protocols=imap). to disable n$EDIT#4 As per @Celada's further clarification, I typed telnet imap.mydomain.com 143, in the local devbox that I've been using for Thunderbird testing, and the terminal replied with: Trying my.SERVER.ip.addr... Connected to imap.mydomain.com. Escape character is '^]'. * OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE STARTTLS LOGINDISABLED] Dovecot ready. Next, I typed in openssl s_client -CApath /etc/ssl/certs -starttls imap -port 143 -host imap.mydomain.com at the devbox terminal, and the terminal replied by printing out the details which you can read by clicking on this link to a file sharing site. My complete dovecot.conf can be read at a file sharing site by clicking on this link. EDIT#5 As per @Celada's suggestion, I typed t1 login USERNAME PASSWORD after . OK Pre-login capabilities listed, post-login capabilities have more., and the terminal replied with the following: * CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE SORT SORT=DISPLAY THREAD=REFERENCES THREAD=REFS THREAD=ORDEREDSUBJECT MULTIAPPEND URL-PARTIAL CATENATE UNSELECT CHILDREN NAMESPACE UIDPLUS LIST-EXTENDED I18NLEVEL=1 CONDSTORE QRESYNC ESEARCH ESORT SEARCHRES WITHIN CONTEXT=SEARCH LIST-STATUS SPECIAL-USE BINARY MOVE t1 OK Logged inHowever, I then repeated the Thunderbird login test, and I checked to force Thunderbird to use port 143 and "Normal password". When I did this, Thunderbird forced "Autodetect" as the SSL option, and then clicking the "Re-test" button resulted in the same error message: "Thunderbird failed to find the settings for your email account."
unable establish remote imap connection, why not?
OK, so the good guys over at daemonforums.org solved it for me! Setting IMAP_MAILBOX_SANITY_CHECK=0 (i.e. disabling the sanity check) in /etc/courier/imapd and adding that exact same line to /etc/courier/imapd-ssl did the trick. Also, not sure if it helped or if it would have worked anyway, but I ran: maildirmake /storage/vmail/anton/Maildir which is the Maildir for my account; obviously the path will differ for anyone retracing my footsteps in this issue.
So I poked around and found out that DEFDOMAIN="@domain.se" is messing things up, so I removed that from /etc/courier/imapd and I got to the point where SMTP works and I get this from the IMAP side:

Jul 2 13:23:10 HOST authdaemond: Authenticated: sysusername=anton, sysuserid=<null>, sysgroupid=20001, homedir=/storage/vmail/anton, address=anton, fullname=Anton, maildir=<null>, quota=<null>, options=<null>
Jul 2 13:23:10 HOST authdaemond: Authenticated: clearpasswd=MyPasswd, passwd=$3e$04$AC1c10x0A3etWCJFrla.Rl2sevNhq24yXYxrq8IN7mEeGI20.
Jul 2 13:23:10 HOST imapd-ssl: anton: Account's mailbox directory is not owned by the correct uid or gid

But I'm not sure why, because:

# ls -l /storage/vmail/
-rw-r--r-- 1 vmail vmail 22 Mar 13 01:06 .Xdefaults
-rw-r--r-- 1 vmail vmail 773 Mar 13 01:06 .cshrc
-rw-r--r-- 1 vmail vmail 398 Mar 13 01:06 .login
-rw-r--r-- 1 vmail vmail 113 Mar 13 01:06 .mailrc
-rw-r--r-- 1 vmail vmail 218 Mar 13 01:06 .profile
drwx------ 2 vmail vmail 512 Jun 30 10:44 .ssh
drwxr-xr-x 3 anton anton 512 Jun 30 10:44 anton

My /etc/courier/imapd says:

MAILDIRPATH=/storage/vmail

But I've also tried:

MAILDIRPATH=Maildir

And /etc/passwd states:

# cat /etc/passwd | grep anton
anton:*:20001:20001:Anton:/storage/vmail/anton:/sbin/nologin

Where am I going wrong?
Courier IMAP - Account's mailbox directory is not owned by the correct uid or gid
The authentication type PLAIN means there is no specific security mechanism for the password itself at the IMAP protocol layer. But the authentication still happens inside the TLS 1.2 connection, so unless the TLS negotiation accepted a NULL cipher, the whole connection, including the transmission of the password, is protected by TLS 1.2. To identify the actual strength of the TLS 1.2 encryption, you would need to look at the encryption algorithms and key lengths negotiated on the connection; the <some string of letters and numbers> part of the "SSL/TLS connection using TLS1.2" message contains this information.
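If you want to double-check from outside mutt, a hedged way to see the negotiated cipher suite is to open the same kind of STARTTLS connection with openssl (substitute your real server and port) and look at the Protocol and Cipher lines in the output:

openssl s_client -starttls imap -connect imap.domain.tld:143 < /dev/null 2>/dev/null | grep -E 'Protocol|Cipher'

As long as the cipher is not a NULL or EXPORT one, the PLAIN credentials never cross the network unencrypted.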
I have recently started using mutt to access my email account via IMAP. My IMAP connection settings are as follows:

set ssl_starttls = yes
set ssl_force_tls = yes
set imap_user = "[emailprotected]"
set smtp_url = "smtp://[emailprotected]@smtp.domain.tld:[port]/"
set folder = "imaps://imap.domain.tld:[port]"
set hostname = domain.tld

I have not stored my password, so I have to type in my password every time I log in. When I start mutt I see the following on the bottom line: SSL/TLS connection using TLS1.2 (<some string of letters and numbers>). When I type in my password on being prompted I see the following in the bottom line of my mutt window: Authenticating (PLAIN)... Does this mean that mutt is transmitting my password in plaintext? Thank you for your help.
IMAP authentication by Mutt: Is Mutt transmitting password in plaintext?