This site serves as an open notebook for some of my thoughts. Caveat lector.
Heard on the Note to Self podcast:
Back in 2007, they tried a system that rewarded the people who spent the least amount of time on their site. Harris explains: “If I was going to Paris and I was staying there for four days, they would estimate how many hours would happen in those four days between me and the person who hosts me in Paris. And then they would ask both people, ’How positive were those hours? Did you have a good time together?’ So they’re getting kind of a count of the number of positive hours. And then what they do is subtract all of the time that both people spent on Couchsurfing’s website. They take that as a cost to people’s lives. ’Cause having people search and send messages and look at profiles, they don’t view that as a contribution that’s positive to people’s lives. And what you’re left with is just these new net positive hours that would have never existed if Couchsurfing didn’t exist.”
There’s something I find really compelling about this metric because it has some really nice qualities. It’s readily understandable, easy to calculate, hard to game (accidentally or otherwise), and doesn’t have any obvious failure mode where optimizing for it would cause the wrong thing to happen.
On the drawbacks front, it’s obviously not totally precise. One can argue that it is overly dependent on the fidelity of reports from the individuals who had the experience, but at the same time, perhaps their recollection of the number of positive hours is actually more important than the exact clock time.
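The calculation described above reduces to a simple subtraction; a back-of-the-envelope sketch (all numbers here are hypothetical):

```shell
# Hypothetical example: both parties report 24 positive in-person hours
# over the four-day stay, and together spent 3 hours on the site
# searching, messaging, and browsing profiles.
positive_hours=24
site_hours=3
net_hours=$((positive_hours - site_hours))
echo "net positive hours: $net_hours"
```

Hours spent on the site count as a cost, so only time the service created in the real world is left over.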
For a couple of years now I’ve been using a hardware token to store SSH private keys. The Yubikey NEO is a really convenient device with support for both U2F and OpenPGP smartcard secrets, which gpg-agent can serve to SSH. On the hardware side the device is compact, lightweight and durable. Setting up the software side required far more trial and error than it should have, but once configured it has remained reliable. This post isn’t a step-by-step guide; instead it’s a place to record the trickier details of getting this set up.
This part was largely directed by Simon Josefsson’s post Offline GnuPG Master Key and Subkeys on YubiKey NEO Smartcard which is best followed directly rather than trying to reproduce it here. The basic steps are:
- Generate a master key
- Create subkeys for signing, encryption and authentication
- Move the three subkeys to the Yubikey
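The first two steps can be sketched non-interactively with GnuPG 2.1+ (the identity, key sizes, and expiry below are placeholders, and the final keytocard step is interactive, so it appears only as a comment):

```shell
# Use a throwaway keyring so this sketch doesn't touch the real one.
export GNUPGHOME="$(mktemp -d)"
chmod 700 "$GNUPGHOME"

# 1. Generate a certify-only master key.
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "Example User <user@example.com>" rsa4096 cert never

# 2. Add signing, encryption and authentication subkeys.
FPR=$(gpg --list-keys --with-colons | awk -F: '/^fpr/ {print $10; exit}')
for usage in sign encr auth; do
  gpg --batch --pinentry-mode loopback --passphrase '' \
      --quick-add-key "$FPR" rsa2048 "$usage" 1y
done
gpg --list-keys "$FPR"

# 3. Moving the subkeys onto the Yubikey is interactive:
#      gpg --edit-key "$FPR"   then   key N / keytocard   for each subkey
```

Josefsson’s post covers the real procedure, including backing up the master key offline before anything touches the card.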
Handling multiple smartcards
Almost all of my machines have multiple smartcard readers or TPMs. GPG’s scdaemon gets confused when one of these doesn’t speak OpenPGP, but doesn’t make it entirely clear that’s what’s going on. To remedy this, add the following line to ~/.gnupg/scdaemon.conf (the reader name depends on your platform):

reader-port "Yubico Yubikey NEO U2F+CCID 0"
SSH support (Linux, OS X)
On Linux and OS X this provides a drop-in replacement for the usual SSH agent. In ~/.gnupg/gpg-agent.conf, add the following:

enable-ssh-support
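The agent then needs to be reachable where ssh expects it. A common shell-init sketch, assuming GnuPG 2.1+ where gpgconf knows the agent’s socket path:

```shell
# Point ssh at gpg-agent's SSH socket and make sure the agent is running.
export SSH_AUTH_SOCK="$(gpgconf --list-dirs agent-ssh-socket)"
gpgconf --launch gpg-agent
ssh-add -L   # lists the public key once the Yubikey is inserted
```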
SSH support (Windows)
In %APPDATA%\gnupg\gpg-agent.conf, add the line:

enable-putty-support
This provides support for PuTTY, where it replaces the role Pageant plays. More recently I prefer to use Mosh, which has much better support for intermittent connectivity and higher-latency connections. Mosh is easily used from inside a Cygwin environment, but for various reasons there isn’t a Cygwin-native gpg-agent. ssh-pageant can be installed from Cygwin’s setup tool and will automatically connect to the Pageant emulation provided by gpg-agent.
In the Cygwin ~/.bash_profile, add the following:

# ssh-pageant
eval $(/usr/bin/ssh-pageant -r -a "/tmp/.ssh-pageant-$USERNAME")
Bonus: Stub generation
Usually this happens automatically, but in some cases a new machine may need to be prompted to generate stubs for the keys on the Yubikey. These are simply pointers to the private keys that remain on the device. Reference.
gpg-connect-agent learn /bye
In my homelab I run a variety of hypervisors, including Hyper-V Server 2012 R2, which is both fully featured (as a hypervisor) and very spartan (in terms of UI). Any server configuration changes are made at the command line, either with PowerShell or cmd.
Recently I’ve been working through setting up a separate network configured to use jumbo frames for higher throughput between some multi-NICed servers and the storage server. Larger packets mean less overhead while transferring large quantities of data because the ratio of header to payload is reduced; the only drawback is that not all devices and switches support larger frames.
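A rough sketch of the overhead difference, assuming ~40 bytes of TCP/IP headers per packet and ignoring Ethernet framing:

```shell
headers=40        # assumed TCP/IP header bytes per packet
for mtu in 1500 9000; do
  payload=$((mtu - headers))
  pct=$((payload * 100 / mtu))
  echo "MTU $mtu: ${pct}% payload"
done
```

The gain looks small per packet, but at sustained multi-gigabit transfer rates the reduced per-packet processing adds up.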
The following steps describe the process for enabling jumbo packets on the Hyper-V host.
Enable the network adapter
This step is probably hardware-dependent. In my case, the Supermicro board has Intel 82579LM and 82574L NICs, and for the Ethernet 3 interface the configuration option was called Jumbo Packet. Get-NetAdapterAdvancedProperty can be used to find the correct configuration name.
Set-NetAdapterAdvancedProperty -Name "Ethernet 3" -DisplayName "Jumbo Packet" -DisplayValue "9014 Bytes"
Enable the virtual switch
With the driver for the physical hardware updated, the next step is to change the configuration on the virtual switch. Here the interface name vEthernet (Storage) may be different depending on how your switches are set up.
netsh interface ip set subinterface "vEthernet (Storage)" mtu=9000 store=persistent
Enable inside the VM
Finally the guest OS needs to be configured to use an MTU of 9000.
In my case I had a CoreOS guest, which is configured with a cloud-config file. The NIC connected to the vEthernet (Storage) virtual switch surfaces as eth1. The 00-eth1.link configuration sets the MTU appropriately, and the trailing systemd-udev-trigger.service restart is a workaround for Bug #174. The 00-eth1.network unit sets a static IP on the interface, since I have no DHCP server running on this subnet.

Here are the relevant changes to the cloud-config file:
coreos:
  ...
  units:
    ...
    - name: 00-eth1.link
      runtime: true
      content: |
        [Match]
        Name=eth1

        [Link]
        MTUBytes=9018
    - name: 00-eth1.network
      runtime: true
      content: |
        [Match]
        Name=eth1

        [Network]
        Address=10.180.1.2/24
    - name: systemd-udev-trigger.service
      command: restart
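With everything in place, a do-not-fragment ping sized to the MTU makes a quick end-to-end check (the 28 bytes are the IP and ICMP headers; the target address is just an example host on this subnet):

```shell
mtu=9000
payload=$((mtu - 28))   # 20-byte IP header + 8-byte ICMP header
echo "ping -M do -s $payload 10.180.1.1"
# If any hop doesn't support jumbo frames, the ping above reports
# "Message too long" instead of a normal reply.
```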
Proxmox provides a convenient platform for hosting OpenVZ containers and full virtual machines on a single host. One challenge I ran into was trying to mount NFS volumes from inside a container which led to the head-scratching error:
mount: unknown filesystem type 'nfs'
Fortunately, there is an easy fix, which is actually quite well documented on the OpenVZ site. For container 101, the following commands update the container configuration and quickly get things back on track.
vzctl stop 101
vzctl set 101 --features "nfs:on" --save
vzctl start 101
It’s nice to have three monitors for the screen real estate, but the light from the screens can often be overwhelming, especially at night. I installed three 10-inch LED strips behind the monitors, which give a nice ambient light behind the screens.
I wrote a very simple app, vera-monitor-backlight, that runs in the background and watches system monitor state in Windows 8+. When the monitors turn off, it sends a command to Vera to turn off the light. When the monitors come back on, the backlight is turned on again. The code is hosted on GitHub.
Tested with a VeraLite home automation controller. You will likely need to change the IP address and DeviceNum parameters in the OffUrl variables to match your setup.
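For reference, the Vera HTTP request the app sends looks roughly like this (the IP address and DeviceNum below are placeholders for your own setup):

```shell
vera_ip="192.168.1.10"   # placeholder: your VeraLite's address
device_num=6             # placeholder: the backlight's DeviceNum
new_value=0              # 0 = off, 1 = on
off_url="http://${vera_ip}:3480/data_request?id=action&DeviceNum=${device_num}&serviceId=urn:upnp-org:serviceId:SwitchPower1&action=SetTarget&newTargetValue=${new_value}"
echo "$off_url"
# curl -s "$off_url"     # uncomment to actually switch the light
```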
It also serves as an example of how to use POWERBROADCAST_SETTING in C#.
Older posts can be found in the archives.