Blog
screen / wipes copy buffer
A mishmash of bugs and workarounds causes the copy buffer (X selection) to get wiped some of the time in my recent desktop environment, and in a seemingly unpredictable manner at that. The following bug is mostly in play: GNOME VTE soft reset wipes selection. That bug causes: reset(1) to wipe the middle-mouse (primary) buffer (although this differs per system — could not put my finger on it); reset(1) to wipe the clipboard buffer, but only if the reset was called from the window that originated the current clipboard buffer contents; GNU screen(1) initialization to misbehave as reset does, as described above — even through an ssh session — by wiping the buffer, if TERM=xterm-256color.
dovecot / roundcube / mail read error
Today we ran into a dovecot/imap crash on a Xenial box. The Dovecot in question was the patched dovecot-2.2.22. Due to an as yet unexplained cause, reading mail through the Thunderbird mail client worked fine, but when opening a message with Roundcube (webmail), most messages would give an odd error about a “message that could not be opened”. An IMAP trace of Roundcube revealed that the IMAP server stopped responding after the client's A0004 UID FETCH command.
Meltdown & Spectre attacks
Information regarding the Meltdown and Spectre attacks. Current state Waiting for software patch availability. Patched Ubuntu kernels are available for testing. Updates: 20180104: Created blogpost 20180105: Added new information/links 20180105: Status update 20180108: Added information from Red Hat about the performance impact of the patches. 20180108: Updated links list. 20180108: Status update Links https://spectreattack.com / https://meltdownattack.com (same site) https://arstechnica.com/gadgets/2018/01/meltdown-and-spectre-every-modern-processor-has-unfixable-security-flaws/ https://arstechnica.com/gadgets/2018/01/meltdown-and-spectre-heres-what-intel-apple-microsoft-others-are-doing-about-it/ https://wiki.ubuntu.com/SecurityTeam/KnowledgeBase/SpectreAndMeltdown https://www.raspberrypi.org/blog/why-raspberry-pi-isnt-vulnerable-to-spectre-or-meltdown/ High level description CVEs Spectre - CVE-2017-5715 Spectre - CVE-2017-5753 Meltdown - CVE-2017-5754 As described on: https://spectreattack.
Recap 2017
2017 ISO27001 certified + NEN7510 Ewout joined our team as an SRE More projects open-sourced on GitHub: https://github.com/ossobv Uradesign designed logos for several of our open source projects Started providing Kubernetes as a Service / Managed Kubernetes Lots of interesting stuff
flake8 / vim / python2 / python3
In 2015, I wrote a quick recipe to use Vim F7-key flake8 checking for both python2 and python3 using the nvie/vim-flake8 Vim plugin. Here's a quick update that works today. Tested on Ubuntu/Zesty. $ sudo apt-get install python3-flake8 # py3 version, no cli $ sudo -H pip install flake8 # py2 version, with cli $ sudo cp /usr/local/bin/flake8{,.2} # copy to flake8.2 $ sudo sed -i -e 's/python$/python3/' /usr/local/bin/flake8 # update shebang $ sudo mv /usr/local/bin/flake8{,.
Availability during holiday December 2017
Starting the 16th of December we are on leave. We return to the office on the 2nd of January. During this period we are available 24/7 for incident response and other urgent matters as usual. If you already know of any urgent requests that need to be handled during this period, please inform us in advance so we can plan the required availability.
reprepro / multiversion / build recipe
We used to use reprepro (4.17) to manage our package repository. However, it did not support serving multiple versions of the same package. The Benjamin Drung version from GitHub/profitbricks/reprepro does. Here’s our recipe to build it. $ git clone -b 5.1.1-multiple-versions https://github.com/profitbricks/reprepro.git $ cd reprepro It lacks a couple of tags, so we’ll add some lightweight ones. $ git tag 4.17.1 2d93fa35dd917077e9248c7e564648da3a5f1fe3 && git tag 4.17.1-1 0c9f0f44a84f67ee5f14bccf6507540d4f7f8e39 && git tag 5.
Maintenance network Mediacentrale Nov 1st 2017 - 22:00
Maintenance network Mediacentrale On November 1st 2017 after 22:00 we will upgrade our network in the Mediacentrale. Due to roadworks around Julianaplein in Groningen that will impact our current connections, we will move our network traffic to an upgraded router and fiber path, thus minimizing downtime related to these roadworks. Furthermore, this maintenance also results in an upgrade in our capacity to the Mediacentrale, as we will upgrade from a 1G to 10G infrastructure.
linux / process uptime / exact
How to get (semi)exact uptime values for processes? If you look at the ps faux listing, you’ll see a bunch of values: walter 27311 0.8 1.8 5904852 621728 ? SLl sep06 61:05 \_ /usr/lib/chromium-browser/... walter 27314 0.0 0.2 815508 80852 ? S sep06 0:00 | \_ /usr/lib/chromium-brow... walter 27316 0.0 0.0 815508 14132 ? S sep06 0:01 | | \_ /usr/lib/chromium-... The second column (27311) is the PID; the tenth (61:05) is how much CPU time has been spent.
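For exact numbers, ps(1) on a reasonably recent Linux procps can report elapsed time directly through the etime and etimes output fields, which saves any arithmetic. A quick sketch, using the current shell as the example PID:

```shell
# etime  = formatted elapsed time since the process started
# etimes = the same value in plain seconds (handy for scripting)
ps -o pid=,etime=,etimes= -p $$
```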
sudo / cron / silence logging / authlog
Do you use sudo for automated tasks? For instance to let the Zabbix agent access privileged information? Then your auth.log may look a bit flooded, like this: Aug 30 10:51:44 sudo: zabbix : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/sbin/iptables -S INPUT Aug 30 10:51:44 sudo: pam_unix(sudo:session): session opened for user root by (uid=0) Aug 30 10:51:44 sudo: pam_unix(sudo:session): session closed for user root Or, if you run periodic jobs by root from cron, you get this:
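A sketch of the direction the fix takes, assuming your sudo is recent enough to support these Defaults flags (the file name is my choice):

```
# /etc/sudoers.d/zabbix -- hypothetical file name
# Suppress the syslog entries and the PAM "session opened/closed"
# lines for the zabbix user's automated sudo calls.
Defaults:zabbix !syslog, !pam_session
```

Scoping the Defaults to one user keeps normal interactive sudo fully logged.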
powerdns / pdnsutil / remove-record
The PowerDNS nameserver pdnsutil utility has an add-record, but no remove-record. How can we remove records programmatically for many domains at once? Step one: make sure we can list all domains. For our PowerDNS 4 setup, we could do the following: $ list_all() { ( for type in master native; do pdnsutil list-all-zones $type; done ) | grep -vE '^.$|:' | sort -V; } $ list_all domain1.tld domain2.tld ... Step two: filter the domains where we want to remove anything.
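Step three would then be a loop over the filtered zones. A sketch, assuming PowerDNS 4's pdnsutil delete-rrset subcommand (which removes all records of one type at a given name); the helper name and the record arguments are hypothetical:

```shell
# Hypothetical wrapper: delete-rrset ZONE NAME TYPE drops every
# TYPE record at NAME.ZONE in one go.
remove_record() {
    pdnsutil delete-rrset "$1" "$2" "$3"
}

# Then feed it the filtered zone list, e.g.:
#   for zone in $(list_all | grep -Fxf zones-to-clean.txt); do
#       remove_record "$zone" old-host A
#   done
```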
gdb / debugging asterisk / ao2_containers
One of our Asterisk telephony machines appeared to “leak” queue member agents. That is, refuse to ring them because they were supposedly busy. When trying to find the cause, I found there were no data dumping functions in the CLI for the container I wanted to inspect: in this case the pending_members, which is of type struct ao2_container. So, we had to resort to using gdb to inspect the data. The struct ao2_container container data type itself looks like this:
letsencrypt / expiry mails / unsubscribe
Today I got one of these Letsencrypt Expiry mails again. It looks like this: Your certificate (or certificates) for the names listed below will expire in 19 days (on 21 Jun 17 19:38 +0000). Please make sure to renew your certificate before then, or visitors to your website will encounter errors. [domain here] ... If you want to stop receiving all email from this address, click [link here] (Warning: this is a one-click action that cannot be undone) I don’t need this particular domain anymore.
puppet / pip_version / facter
Every once in a while I have to deal with machines provisioned by puppet. I can’t seem to get used to the fact that --test not only tests, but actually does. It does display what it does through its output, which is nice. To test without applying, you need the --noop flag. But, today I wanted to bring up the quick fix to this old warning/error: Error: Facter: error while resolving custom fact "pip_version": undefined method `[]' for nil:NilClass The cause of the issue is an old version of pip(1) which has no --version parameter.
ubuntu zesty / apt / dns timeout / srv records
Ever since I updated from Ubuntu/Yakkety to Zesty, my apt-get(1) would sit and wait a while before doing actual work: $ sudo apt-get update 0% [Working] Madness. Let’s see what it’s doing… $ sudo strace -f -s 512 apt-get update ... [pid 5603] connect(3, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("127.0.0.1")}, 16) = 0 ... [pid 5603] sendto(3, "\1\271\1\0\0\1\0\0\0\0\0\0\5_http\4_tcp\3ppa\tlaunchpad\3net\0\0!\0\1", 46, MSG_NOSIGNAL, NULL, 0) = 46 [pid 5603] poll([{fd=3, events=POLLIN}], 1, 5000 <unfinished ...> ... [pid 5600] select(8, [5 6 7], [], NULL, {0, 500000}) = 0 (Timeout) .
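The delay comes from that SRV lookup (_http._tcp.ppa.launchpad.net) timing out against the local resolver. If you would rather skip apt's SRV discovery altogether, it can be switched off in an apt.conf snippet (the file name here is my choice):

```
// /etc/apt/apt.conf.d/90disable-srv (hypothetical file name)
// Stop apt from doing _http._tcp SRV lookups before each fetch.
Acquire::EnableSrvRecords "false";
```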
squashing old git history
You may have an internal project that you wish to open source. When starting the project, you didn’t take that into account, so it’s likely to contain references to private data that you do not wish to share. Step one would be to clean things up. This can be a slow process, and in the meantime the project keeps getting updates. Now, at some point you’re confident that at commit X1000, the project contains only non-private data.
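The squash itself can be sketched in a throwaway repo; this shows one way to do it, where $CLEAN stands in for commit X1000 and the branch names and commit messages are mine:

```shell
# Build a tiny repo that mimics the situation: private history,
# then a cleaned-up commit ($CLEAN, i.e. "X1000"), then later work.
cd "$(mktemp -d)"
git init -q
git config user.email you@example.com && git config user.name you
echo private > notes.txt && git add notes.txt && git commit -qm "private history"
echo public > notes.txt && git commit -qam "cleaned up"
CLEAN=$(git rev-parse HEAD)            # this plays the role of commit X1000
MAIN=$(git branch --show-current)      # master or main, depending on git version
echo feature > app.txt && git add app.txt && git commit -qm "later work"

# The actual squash: a new rootless branch holding X1000's tree,
# one fresh commit, then replay everything after X1000 on top of it.
git checkout -q --orphan public "$CLEAN"
git commit -qm "Initial open source release"
git rebase -q --onto public "$CLEAN" "$MAIN"
git log --oneline                      # only the new root and "later work" remain
```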
Loadbalancer maintenance 22nd February 2017
In the night of Wednesday (Feb. 22nd 2017) to Thursday (Feb. 23rd 2017) between 23:45 and 03:00 we will perform maintenance on our loadbalancers. ####Maintenance window 22-02-2017 23:45 - 23-02-2017 03:00 ####Description Loadbalancers will be upgraded to increase throughput and enable new capabilities. ####Impact When maintenance starts, we’ll reroute all traffic to the secondary loadbalancer. Customers that have a multi-site setup should therefore not have any service interruption.
detect invisible selection / copy buffer / chrome
In Look before you paste from a website to terminal the author rightly warns us about carelessly pasting any input from a web page into the terminal. This LookBeforePaste Chrome Extension is a quick attempt at warning the user. Example output when pressing CTRL-C on the malicious code: The heuristics are defined as follows. They could certainly be improved, but it’s a start. function isSuspicious(node) { if (node.nodeType == node.
convert / dehydrated / certbot / letsencrypt config
If you, like me, find yourself in the situation that you have to reuse your Letsencrypt credentials/account generated by Dehydrated (a bash Letsencrypt interface) with the official Certbot client, you’ll want to convert your config files. In my case, I wanted to change my e-mail address, and the Dehydrated client offered no such command. With Certbot you can do this: $ certbot register --update-registration --account f65c... But you’ll need your credentials in a format that Certbot groks.
mysql / deterministic / reads sql data
Can I use the MySQL function characteristic DETERMINISTIC in combination with READS SQL DATA and do I want to? TL;DR If the following two groups of statements are the same to you, you want the DETERMINISTIC characteristic on your FUNCTION, even if you have READS SQL DATA. SET @id = (SELECT my_func()); SELECT * FROM my_large_table WHERE id = @id; -- versus SELECT * FROM my_large_table WHERE id = my_func(); (All of this is tested with MySQL 5.
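For reference, the two characteristics are independent and may be combined in the function definition; the function body below is a made-up example for illustration, not taken from the post:

```sql
-- DETERMINISTIC and READS SQL DATA combined (hypothetical function):
CREATE FUNCTION my_func() RETURNS INT
    DETERMINISTIC
    READS SQL DATA
RETURN (SELECT MIN(id) FROM my_large_table);
```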
Availability during holiday December 2016
Starting the 17th of December we are on leave. We return to the office on the 2nd of January. During this period we are available 24/7 for incident response and other urgent matters as usual. If you already know of any urgent requests that need to be handled during this period, please inform us in advance so we can plan the required availability.
patch-a-day / pdns-recursor / broken edns lookups
Last month, our e-mail exchange (Postfix) started having trouble delivering mail to certain destinations. These destinations all appeared to be using Microsoft Office 365 for their e-mail. What was wrong? Who was to blame? And how to fix it? The problem appeared like this: Nov 16 17:04:08 mail postfix/smtp[13330]: warning: no MX host for umcg.nl has a valid address record Nov 16 17:04:08 mail postfix/smtp[13330]: 1D1D21422C2: to=<-EMAIL-@umcg.nl>, relay=none, delay=2257, delays=2256/0.02/0.52/0, dsn=4.
patch-a-day / dovecot / broken mime parts / xenial
At times, Dovecot started spewing messages into dovecot.log about a corrupted index cache file because of “Broken MIME parts”. This happened on Ubuntu/Xenial with dovecot_2.2.22-1ubuntu2.2: imap: Error: Corrupted index cache file dovecot.index.cache: Broken MIME parts for mail UID 33928 in mailbox INBOX: Cached MIME parts don't match message during parsing: Cached header size mismatch (parts=4100...) imap: Error: unlink(dovecot.index.cache) failed: No such file or directory (in mail-cache.c:28) imap: Error: Corrupted index cache file dovecot.
tmpfs files not found / systemd
While debugging a problem with EDNS records, I wanted to get some cache info from the PowerDNS pdns-recursor. rec_control dump-cache should supply it, but I did not see it. # rec_control dump-cache out.txt Error opening dump file for writing: Permission denied Doh, it’s running as the pdns user. Let’s write in /tmp. # rec_control dump-cache /tmp/out.txt dumped 42053 records # less /tmp/out.txt /tmp/out.txt: No such file or directory Wait what? No files?
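A likely explanation for this symptom (an assumption on my part, but a common one) is systemd's PrivateTmp=yes on the pdns-recursor unit: the dump really is written, but into a per-service /tmp namespace that shows up on the host under /tmp/systemd-private-*/tmp/. If you prefer a shared /tmp over the isolation, a unit override sketch:

```
# /etc/systemd/system/pdns-recursor.service.d/override.conf
# (hypothetical override; trades tmp isolation for a shared /tmp)
[Service]
PrivateTmp=no
```

After `systemctl daemon-reload` and a service restart the dump lands in the real /tmp; alternatively, just read it from the private directory instead.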
setting up powerdns slave / untrusted host
When migrating our nameserver setup to start using DNSSEC, a second requirement was to offload a resolver to somewhere off-network. You want your authoritative nameservers to be distributed across different geographical regions, networks and top-level domains. That means, don't do this: ns1.thedomain.com - datacenter X in Groningen ns2.thedomain.com - datacenter X in Groningen Do do this: ns1.thedomain.com - datacenter X in Groningen ns2.thedomain.org - datacenter Y in Amsterdam In our case, we could use a third nameserver in a separate location: a virtual machine hosted by someone other than us.
mysql sys schema / mysqldump failure
After upgrading the mysql-server to 5.7 and enabling GTIDs, the mysql-backup script started spewing errors. Warning: A partial dump from a server that has GTIDs will by default include the GTIDs of all transactions, even those that changed suppressed parts of the database. If you don't want to restore GTIDs, pass --set-gtid-purged=OFF. To make a complete dump, pass --all-databases --triggers --routines --events. (...repeated for every database schema...) mysqldump: Couldn't execute 'SHOW FIELDS FROM `host_summary`': View 'sys.
copy-pasting into java applications / x11
The other day I was rebooting our development server. It has full disk encryption, and the password for it has to be specified at boot time, long before it has network access. Even though the machine is in the same building, walking over there is obviously not an option. The machine has IPMI, like all modern machines do, so we can connect a virtual console over the local network. For that, we use the SuperMicro ipmiview tool.
packaging supermicro ipmiview / debian
Do you want to quickly deploy SuperMicro ipmiview on your desktop? IPMI is a specification for monitoring and management of computer hardware. Usually this is used for accessing servers in a data center when the regular remote login is not available. Think: hard rebooting a stuck machine, specifying the full disk encryption password at boot time, logging onto a machine where the remote login (ssh daemon) has disappeared. The SuperMicro IPMI devices have an embedded webserver, but it requires Java to access the console.
golang / statically linked
So, Go binaries are supposed to be statically linked. That’s nice if you run inside cut-down environments where not even libc is available. But sometimes they use shared libraries anyway? TL;DR: Use CGO_ENABLED=0 or -tags netgo to create a static executable. Take this example: $ go version go version go1.6.2 linux/amd64 $ go build gocollect.go $ file gocollect gocollect: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, \ interpreter /lib64/ld-linux-x86-64.
sipp / travis / osx build / openssl
A couple of days ago our SIPp Travis CI builds started failing due to missing OpenSSL include files. Between SIPp build 196 and SIPp build 215 the OSX builds on Travis started failing with the following configure error: checking openssl/bio.h usability... no checking openssl/bio.h presence... no checking for openssl/bio.h... no configure: error: <openssl/bio.h> header missing It turns out that something had changed in the build environment and OpenSSL headers and libraries were no longer reachable.
lxc / create image / debian squeeze
I’m quite happy with our LXC environment on which I’ve got various Debian and Ubuntu build VMs so I can package backports and other fixes into nice .deb packages. Today I needed an old Debian/Squeeze machine to build backports on. Step one: check the lists. $ lxc remote list +-----------------+--------------------------+---------------+-----+-----+ | NAME | URL | PROTOCOL | PUB | STC | +-----------------+--------------------------+---------------+-----+-----+ | images | https://images.linuxco...| simplestreams | YES | NO | +-----------------+--------------------------+---------------+-----+-----+ | local (default) | unix:// | lxd | NO | YES | +-----------------+--------------------------+---------------+-----+-----+ | ubuntu | https://cloud-images.
Planned maintenance 22 August 2016
In the night of Monday (Aug. 22nd) to Tuesday (Aug. 23rd) between 23:45 and 06:00 we will perform network maintenance on our core network. ####Maintenance window Monday (Aug. 22nd) to Tuesday (Aug. 23rd) between 23:45 and 06:00 (CEST) ####Description One of the core routers (CR2) at TCN will be relocated to a new rack to allow further expansion on this site and make some desired improvements in the meantime. Impact During the maintenance, routing will be adjusted to free CR2 of traffic.
HTTP_PROXY "httpoxy" vulnerability (Dutch)
A bit of explanation about the HTTP_PROXY vulnerability reported at https://httpoxy.org/. Your site is vulnerable if the following criteria are met: You have a web application that itself makes web calls, for example communication with a payment provider or an internal backend (by means of python-requests, curl, http_get, etc.). The web server passes the Proxy header through as HTTP_PROXY. This is the default. The HTTP_PROXY header ends up among the environment variables. This is not the case under, for example, Python with uwsgi, but it is for CGI applications and also for php-fpm and apache-mod_php!
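For Apache with mod_headers, the mitigation recommended on httpoxy.org is to strip the header before it reaches the application; a minimal sketch:

```
# Strip the attacker-supplied Proxy request header early,
# before it can surface as HTTP_PROXY in CGI/PHP environments.
RequestHeader unset Proxy early
```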
Planned maintenance 21 July 2016 [UPDATED]
In the night of Thursday (Jul. 21st) to Friday (Jul. 22nd) between 23:00 and 04:00 we will perform network maintenance on our core network. ####Maintenance window Thursday (Jul. 21st) to Friday (Jul. 22nd) between 23:00 and 04:00 (CEST) ####Description Extra network capacity on our core network will be deployed and tested. During the maintenance, routing will be adjusted to free the affected links of any traffic. Impact No impact expected. Increased chance of short service disruptions due to changes in the network.
letsencrypt / license update / show differences
This morning, Let’s Encrypt e-mailed me that the Subscriber Agreement was updated; but it had no diff. Let’s Encrypt Subscriber, We’re writing to let you know that we are updating the Let’s Encrypt Subscriber Agreement, effective August 1, 2016. You can find the updated agreement (v1.1.1) as well as the current agreement (v1.0.1) in the “Let’s Encrypt Subscriber Agreement” section of the following page: https://letsencrypt.org/repository/ Thank you for helping to secure the Web by using Let’s Encrypt.
Planned maintenance 17 June 2016
In the night of Friday (Jun. 17th) to Saturday (Jun. 18th) between 23:00 and 04:00 we will perform network maintenance on our core network. ####Maintenance window Friday (Jun. 17th 2016) to Saturday (Jun. 18th 2016) between 23:00 and 04:00. ####Description Extra network capacity on our core network will be deployed and tested. During the maintenance, routing will be adjusted to allow active maintenance on certain fiber paths. Impact No impact expected.
apt / insufficiently signed / weak digest
When adding our own apt repository to a new Ubuntu/Xenial machine, I got an “insufficiently signed (weak digest)” error. # apt-get update ... W: gpgv:/var/lib/apt/lists/partial/ppa.osso.nl_ubuntu_dists_xenial_InRelease: The repository is insufficiently signed by key 4D1...0F5 (weak digest) Confirmed it with gpgv. # gpgv --keyring /etc/apt/trusted.gpg \ /var/lib/apt/lists/ppa.osso.nl_ubuntu_dists_xenial_InRelease gpgv: Signature made Wed 23 Mar 2016 10:14:48 AM UTC using RSA key ID B36530F5 gpgv: Good signature from "PPA-OSSO-NL <support+ppa@osso.nl>" # gpgv --weak-digest sha1 --verbose --keyring /etc/apt/trusted.
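The fix belongs on the signing side, not on the client: re-sign the repository with SHA-2 digests. A sketch of the gpg.conf knobs involved, assuming plain gpg does the signing on the repository host:

```
# ~/.gnupg/gpg.conf on the repository signing host
# Prefer/force SHA-2 digests so InRelease is no longer SHA-1 signed.
personal-digest-preferences SHA256
cert-digest-algo SHA256
digest-algo SHA256
```

After that, regenerate and re-sign the Release/InRelease files and the warning should go away.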
lxcfs - proc uptime
When removing the excess LXC and LXD packages from the LXC guest and working around Ubuntu/Xenial reboot issues, I noticed the lxcfs mounts on my LXC guest. (No, you don’t need the lxcfs package on the guest.) guest# mount | grep lxc lxcfs on /proc/cpuinfo type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other) lxcfs on /proc/diskstats type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other) lxcfs on /proc/meminfo type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other) lxcfs on /proc/stat type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other) lxcfs on /proc/swaps type fuse.
lxc - ubuntu xenial - reboot
The current Ubuntu/Xenial guest image on our new LXD container host contained too many packages. It held the lxd package and a bunch of lxc packages. They are not needed on the container guest. At some point before or after removing them, for some reason the ZFS container got unmounted. This went unnoticed until I tried a reboot: guest# reboot lxd# lxc exec guest /bin/bash error: Container is not running. lxd# lxc start guest error: Error calling 'lxd forkstart guest /var/lib/lxd/containers /var/log/lxd/guest/lxc.
renaming / lxd managed lxc container
Renaming an LXD managed LXC container is not straightforward. But if you want to rename the host from inside the container, you should do so on the outside as well. If you don’t, you may notice that, for instance, the DHCP manual IP address assignment doesn’t work as expected. Creating a new LXC container For example, we’ll create a new container called walter-old with a fresh Debian/Jessie on it.
missing sofiles / linker / asterisk / pjsip
When compiling Asterisk on Ubuntu/Trusty against a PJProject debianized using the debian/ directory, I got the following compile error: $ gcc -o chan_pjsip.so -pthread -shared -Wl,--version-script,chan_pjsip.exports,--warn-common \ chan_pjsip.o pjsip/dialplan_functions.o -lpjsua2 -lstdc++ -lpjsua -lpjsip-ua \ -lpjsip-simple -lpjsip -lpjmedia-codec -lpjmedia-videodev -lpjmedia-audiodev \ -lpjmedia -lpjnath -lpjlib-util -lsrtp -lpj -lm -lrt -lpthread \ -lSDL2 -lavformat -lavcodec -lswscale -lavutil -lv4l2 -lopencore-amrnb \ -lopencore-amrwb /usr/bin/ld: cannot find -lSDL2 /usr/bin/ld: cannot find -lavformat /usr/bin/ld: cannot find -lavcodec /usr/bin/ld: cannot find -lswscale /usr/bin/ld: cannot find -lavutil /usr/bin/ld: cannot find -lv4l2 /usr/bin/ld: cannot find -lopencore-amrnb /usr/bin/ld: cannot find -lopencore-amrwb collect2: error: ld returned 1 exit status That’s odd.
CVE-2015-7547: glibc getaddrinfo stack-based buffer overflow
On February 16, 2016 details of a vulnerability in glibc were released (CVE-2015-7547). The vulnerability is remotely exploitable and affects a lot of systems. More information will be added as it becomes available. We started emergency patch procedures for our environments and managed customer environments. Summary Classification: Critical. Remote exploitation possible. Impact: Wide impact, all services that use glibc and perform dns resolving are vulnerable. Upstream description The glibc DNS client side resolver is vulnerable to a stack-based buffer overflow when the getaddrinfo() library function is used.
python / xinetd / virtualenv
So, while developing a server application for a client, my colleague Harm decided it would be a waste of our programming time to add TCP server code. Inetd and friends can do that really well. The amount of new connects to the server would be minimal, so the overhead of spawning a new Python process for every connect was negligible. Using xinetd as an inetd server wrapper is simple. The config would look basically like this:
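The actual config is cut off in this excerpt, but the general shape of an xinetd service entry pointing at a virtualenv'd Python server looks roughly like this; the service name, port, and paths are all hypothetical:

```
# /etc/xinetd.d/myapp (hypothetical)
service myapp
{
    type        = UNLISTED
    port        = 12345
    socket_type = stream
    protocol    = tcp
    wait        = no
    user        = nobody
    # Point straight at the virtualenv's interpreter:
    server      = /srv/myapp/venv/bin/python
    server_args = /srv/myapp/server.py
    disable     = no
}
```

With `wait = no`, xinetd spawns one Python process per incoming connection, handing it the socket on stdin/stdout.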
Planned maintenance 13 Feb 2016
In the night of Friday (Feb. 12th) to Saturday (Feb. 13th) between 1:00 and 2:00 our co-location provider will perform maintenance on the PDUs in one of our racks. Customers we consider to be directly affected (bare metal servers) will receive an additional notification. Outside of that it’s mostly OSSO infrastructure services that are affected. ####Maintenance window 01:00-02:00 on 13th of February 2016. ####Description The PDUs of one of the racks will be serviced one by one (A and B feed).
salt master losing children
I recently set up psdiff on a few of my servers as a basic means to monitor process activity. It disclosed that my SaltStack master daemon — which I’m running as a non-privileged user — was losing a single child, exactly 24 hours after I had run salt commands. This seemed to be a recurring phenomenon. The salt server — version 0.17.5+ds-1 on Ubuntu Trusty — was running these processes:
polyglot xhtml
Polyglot XHTML: Serving pages that are valid HTML and valid XML at the same time. A number of documents have been written on the subject, which I shall not repeat here. My summary: HTML5 is not going away. XHTML pages validate in the browser. If you can get better validation during the development of your website, then you’ll save yourself time and headaches. Thus, for your development environment, you’ll set the equivalent of this:
Availability during holiday december 2015
From the 24th of December we are on leave; we return to the office on the 4th of January. During this period we are available 24/7 for incident response and other urgent matters as usual. If you already know of any urgent requests that need to be handled during this period, please inform us in advance so we can plan the required availability.
Planned maintenance - router upgrade RUG RH POP (01:00-04:00 8 DEC 2015)
In the night of Monday (Dec. 7th) to Tuesday (Dec. 8th) between 01:00 and 04:00 we will upgrade the router at our RUG Rekenhal POP. Impact is limited to IP Access locations and IP Transit customers on this POP. ####Maintenance window 01:00-04:00 on 8th of December 2015. ####Description We will upgrade the router to allow planned network upgrades. Impact The router at the RUG Rekenhal POP will be unavailable for 30-60 minutes.
asterisk / editline / key bindings
Getting the Asterisk PBX CLI to work more like you’re used to from the (readline) bash shell can be a time-saver. For example, you may want reverse-i-search (^R), backward word deletion (^W) and word skipping (^<arrow-left> and ^<arrow-right>). It can be done, but you must configure the editline library in a similar manner as you would configure .inputrc. Support for the .editrc configuration file was added in May 2011 (git commit d508a921).
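For reference, the bindings mentioned above would look roughly like this in ~/.editrc; the command names are standard editline ones, but treat the exact set (and the escape sequences for Ctrl-arrow, which vary per terminal) as a sketch:

```
# ~/.editrc -- the asterisk: prefix scopes these to Asterisk's editline
asterisk:bind ^R em-inc-search-prev
asterisk:bind ^W ed-delete-prev-word
asterisk:bind "\e[1;5C" em-next-word
asterisk:bind "\e[1;5D" ed-prev-word
```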
Planned maintenance virtual servers TCN (01:00-06:00 27 NOV 2015)
In the night of Thursday (Nov. 26th) to Friday (Nov. 27th) between 1:00 and 6:00 we will perform maintenance on the virtual server infrastructure at location TCN. ####Maintenance window 01:00-06:00 on 27th of November 2015 ####Description The virtual server infrastructure will be upgraded to a new major software release. Due to the major version upgrade and incompatibility between versions, servers will experience downtime of 15-60 minutes. Impact Virtual servers will be offline for ~15 minutes (up to 60 minutes worst case).