Enabling ‘reserved host RAM’ on VMware Workstation Player

VMware Workstation Player checks how much swap space is available before starting any virtual machine. If the host’s available swap space isn’t at least 50% of the VM’s memory, it displays a warning:


[Screenshot: VMware Workstation Player showing an error message because too little swap space is available]

Unfortunately, the GUI does not offer an option to change this behavior and disable memory overcommitment. However, this can be done by adding prefvmx.minVmMemPct = "100" to /etc/vmware/config:

[...]
prefvmx.minVmMemPct = "100"

Note that this option has to be set globally in /etc/vmware/config and does not work in a virtual machine’s *.vmx file or on a per-user basis in ~/.vmware/preferences.
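
For example, the line can be appended from a shell (using sudo, since /etc/vmware/config is usually only writable by root):

echo 'prefvmx.minVmMemPct = "100"' | sudo tee -a /etc/vmware/config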

Converting a disk image to VHD for Azure

Currently, the Fedora Project provides cloud images as qcow2 and raw disk files. Microsoft’s Azure, however, only supports VHD files. Fortunately, qemu-img can convert between these formats:

qemu-img convert -f qcow2 -o subformat=fixed,force_size -O vpc \
  Fedora-Cloud-Base-27-1.6.x86_64.qcow2 \
  Fedora-Cloud-Base-27-1.6.x86_64.vhd

Note that the creation options subformat=fixed and force_size are required for Azure to be able to use the disk image, since Azure only supports fixed-size disks.
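
Before uploading, the result can be sanity-checked with qemu-img info; the reported virtual size should match the source image (Azure also expects the virtual size to be a whole number of mebibytes):

qemu-img info Fedora-Cloud-Base-27-1.6.x86_64.vhd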

Running Fedora 27 on Google Compute Engine

Usually, Linux distributions with a long life cycle like RHEL (or its free derivative CentOS), Debian or SLES are the way to go for virtual machines in a cloud environment. But sometimes you need to be a little closer to upstream. Maybe your application relies on newer versions of some packages that are not (easily) available on distributions with long-term support, or maybe you need a feature that has just not yet made it into RHEL, Debian or SLES.
In those cases, Fedora is an interesting choice, since it’s probably the Linux distribution that’s closest to upstream and offers the most features that could be considered ‘bleeding edge’. Unfortunately, there’s currently no publicly available Fedora image on the Google Cloud Platform. But not to worry: it’s quite easy to run Fedora 27 on GCE.

The Fedora Project provides a compressed raw disk image that can be used to spawn VMs on different platforms, e.g. GCE. To use it with Google Compute Engine, though, the image has to be renamed and repackaged:

wget https://dl.fedoraproject.org/pub/fedora/linux/releases/27/CloudImages/x86_64/images/Fedora-Cloud-Base-27-1.6.x86_64.raw.xz
xz --decompress Fedora-Cloud-Base-27-1.6.x86_64.raw.xz
mv Fedora-Cloud-Base-27-1.6.x86_64.raw disk.raw
tar --sparse -czf Fedora-Cloud-Base-27-1.6.x86_64.tar.gz disk.raw
rm disk.raw

The image can now be uploaded into a Google Cloud Storage bucket:

gsutil mb gs://fedora-cloud-base-27
gsutil cp Fedora-Cloud-Base-27-1.6.x86_64.tar.gz gs://fedora-cloud-base-27/

Now, we can create an image and use that to spawn a GCE instance:

gcloud compute images create --source-uri gs://fedora-cloud-base-27/Fedora-Cloud-Base-27-1.6.x86_64.tar.gz fedora-cloud-base-27
gcloud compute instances create fedora27 --machine-type f1-micro --image fedora-cloud-base-27 --zone us-east1-b

Of course, you might want to choose a different machine-type or zone here.
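
The available options can be listed with gcloud as well:

gcloud compute zones list
gcloud compute machine-types list --zones us-east1-b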

Once the VM has booted (and assuming a project-level metadata key/value pair provides a public SSH key), one can connect to the instance via:

gcloud compute ssh fedora@fedora27
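
If no such key is present yet, one way to add a project-wide SSH key is via the project metadata (a sketch; ssh-keys.txt is a hypothetical file containing lines of the form username:ssh-rsa AAAA…):

gcloud compute project-info add-metadata --metadata-from-file ssh-keys=ssh-keys.txt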

Allowing blacklisted VMware Workstation Player graphics drivers

For some reason, VMware decided to blacklist some graphics drivers for their VMware Workstation Player. That includes the Mesa DRI drivers for most Intel IGPs, which results in unbearably slow graphics performance and potentially error messages such as “Hardware graphics acceleration is not available” or “No 3D support is available from the host” when starting a virtual machine.

[Screenshot: VMware Workstation Player showing an error message due to a blacklisted driver]

To enable hardware 3D acceleration for blacklisted drivers, the option mks.gl.allowBlacklistedDrivers needs to be enabled:

...
mks.gl.allowBlacklistedDrivers = "TRUE"

This can be done globally in /etc/vmware/config, on a per-user basis in ~/.vmware/preferences, or for each individual VM in the corresponding .vmx file.
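
For example, to enable it for a single VM (a sketch; the path assumes the default VM location on Linux and a hypothetical VM named MyVM):

echo 'mks.gl.allowBlacklistedDrivers = "TRUE"' >> ~/vmware/MyVM/MyVM.vmx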

Arduino development with Eclipse

      No Comments on Arduino development with Eclipse

The Arduino IDE is great for beginners: it makes it really easy to write simple programs without having to care about compiler options, include paths, language standards or how to actually flash the firmware onto the microcontroller. It even comes with a built-in serial monitor, which can be a great tool for debugging.
Inevitably, as people dive deeper into the world of microcontrollers and the Arduino platform specifically, they usually want more control over the toolchain and want to use features like referencing source code across different projects and including external libraries. While all of that is technically possible with the Arduino IDE, some of it can be a bit clunky to set up. IDEs like Eclipse are much better suited for these use cases.

Updating BIOS firmware via iPXE

These days, mainboards usually come with some sort of whizbang tool that allows the user to update the BIOS from a USB drive or straight over the network. Except, of course, that one single mainboard that absolutely needs a new BIOS version on a late Friday afternoon. And obviously the manufacturer only provides a flash tool for DOS, and the mainboard is not supported by flashrom yet.
In those cases, booting FreeDOS can be really handy. Booting FreeDOS via PXE is not that hard, and it can also be booted via iPXE quite easily. If you do boot it via PXE, the easiest way to access the mainboard manufacturer’s flash tool and the new BIOS firmware from within FreeDOS is probably to include them in the PXE image file (see here). With iPXE, however, there’s a much more elegant way…
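
For reference, booting a FreeDOS disk image from iPXE can look roughly like this (a minimal sketch using memdisk from the Syslinux project; the URLs and file names are placeholders):

#!ipxe
kernel http://boot.example.org/memdisk raw
initrd http://boot.example.org/freedos.img
boot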

Migrating a virtual machine from KVM to ESXi

Migrating a virtual machine from one host to another is usually no big deal if both hosts run the same VMM. But what if the source host runs a different hypervisor than the target host? In the case of moving a VM from KVM to ESXi, that’s just not (easily) possible. However, one can convert the virtual hard drive and recreate the virtual machine on the target host, which should be good enough for most use cases.
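
As a rough sketch (the file names are hypothetical), qemu-img can convert a qcow2 disk into a VMDK; the streamOptimized subformat is a common choice for uploading to ESXi, where the disk is then typically cloned into a native format with vmkfstools -i:

qemu-img convert -f qcow2 -O vmdk -o subformat=streamOptimized \
  vm-disk.qcow2 vm-disk.vmdk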

OpenSSH cipher performance

The achievable speed of copying a file with OpenSSH (e.g. with scp) depends on quite a few different factors such as CPU speed, CPU architecture, network throughput, OpenSSH implementation, OS, hard drive speed, etc. But how much of a difference does choosing a different cipher algorithm make? And what’s the fastest OpenSSH cipher algorithm?

Turns out, there’s no simple answer to this question: even when most of the factors that influence the transfer speed are ruled out, the results will still depend on at least the hardware platform and the OpenSSH version. There are quite a few different benchmarks out there, e.g. for the Bifferboard, Xeon E5 CPUs or different consumer-grade CPUs and ARM processors. But since the results are so heavily platform-dependent, it’s a good idea to run your own benchmark on the particular platform you are interested in. So here’s another data point for an Intel Xeon E5-2640 and OpenSSH 6.9p1 (OpenSSL 1.0.1k).
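
Which cipher algorithms a given OpenSSH build supports can be listed with ssh -Q cipher (available since OpenSSH 6.3):

ssh -Q cipher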

The test setup is quite similar to the one described at blog.famzah.net. The bash script used to produce the data is:

#!/bin/bash
# transfer the test file twice with each cipher
for cipher in aes128-ctr aes192-ctr aes256-ctr arcfour256 arcfour128 aes128-cbc 3des-cbc blowfish-cbc cast128-cbc aes192-cbc aes256-cbc arcfour
do
   echo "$cipher"
   for try in 1 2
   do
      scp -c "$cipher" /tank/fs/testfile.img root@localhost:/tank2/fs/
   done
done

The test file consists of 5 GiB of random data. Both the source and the target file system are RAM-backed to remove the influence of HDD read and write speeds. In addition, the test file is written to localhost to ensure that network speed, load and NIC drivers do not influence the test results.
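
Such a setup can be approximated like this (a sketch; the tmpfs mounts stand in for whatever RAM-backed file systems are used, and the paths match the ones in the script above):

# RAM-backed source and target file systems
mount -t tmpfs -o size=6G tmpfs /tank/fs
mount -t tmpfs -o size=6G tmpfs /tank2/fs
# create a 5 GiB test file of random data
dd if=/dev/urandom of=/tank/fs/testfile.img bs=1M count=5120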

[Chart: SCP file transfer speed per cipher]

The results clearly show that the Xeon’s AES instruction set (AES-NI) is being used. Most modern x86 CPUs come with this extension these days.

While this data clearly suggests that AES is the fastest OpenSSH cipher (if there is hardware support for it, as in this case), copying large amounts of data with scp is not a particularly interesting use case. Sending big streams of data through a pipe into ssh, as you do when you send and receive ZFS snapshots over ssh, is a much more common application. For benchmarking purposes, sending actual ZFS snapshots is not ideal, since ZFS takes some extra time to check the receiving file system (and its snapshots) before starting the sending process. So here’s an altered script that should tell us what the fastest cipher for that particular use case is:

#!/bin/bash
# same loop as before, but pipe the file through pv into ssh instead of using scp
for cipher in aes128-ctr aes192-ctr aes256-ctr arcfour256 arcfour128 aes128-cbc 3des-cbc blowfish-cbc cast128-cbc aes192-cbc aes256-cbc arcfour
do
   echo "$cipher"
   for try in 1 2
   do
      cat /tank/fs/testfile.img | pv | ssh -c "$cipher" root@localhost "cat - > /dev/null"
   done
done

The only difference is the innermost command: instead of using scp, the file is now piped directly into ssh and discarded on the receiving side. Again, the 5 GiB test file lives on a RAM-backed file system and the transfer goes to localhost.

[Chart: SSH piped file transfer speed per cipher]