Migrating a virtual machine from one host to another is usually no big deal if both hosts run the same VMM. But what if the source host runs a different hypervisor than the target host? Moving a VM from KVM to ESXi, for example, is just not (easily) possible. However, one can convert the virtual hard drive and recreate the virtual machine on the target host, which should be good enough for most use cases.
The achievable speed of copying a file with OpenSSH (e.g. with
scp) will depend on quite a few different factors such as CPU speed, CPU architecture, network throughput, OpenSSH implementation, OS, hard drive speed, etc. But how much of a difference does choosing a different cipher algorithm make? And what’s the fastest OpenSSH cipher algorithm?
Turns out, there’s no simple answer to this question: even when most of the factors that influence the transfer speed are ruled out, the results still depend on the hardware platform and OpenSSH version. There are quite a few different benchmarks out there, e.g. for the Bifferboard, Xeon E5 CPUs or different consumer grade CPUs and ARM processors. But since the results are so heavily platform dependent, it’s a good idea to run your own benchmark on the particular platform you are interested in. So here’s another data point for an Intel Xeon E5-2640 and OpenSSH 6.9p1 (OpenSSL 1.0.1k).
The test setup is quite similar to the one described at blog.famzah.net. The bash script used to produce the data is:
for cipher in aes128-ctr aes192-ctr aes256-ctr arcfour256 arcfour128 \
              aes128-cbc 3des-cbc blowfish-cbc cast128-cbc aes192-cbc \
              aes256-cbc arcfour
do
    echo "$cipher"
    for try in 1 2
    do
        scp -c "$cipher" /tank/fs/testfile.img root@localhost:/tank2/fs/
    done
done
The test file consists of 5GiB random data. Both the source and target file system are RAM backed to remove the influence of HDD read and write speeds. In addition to that, the test file is written to localhost to ensure that network speed, load and NIC drivers do not influence the test results.
The results clearly show that the Xeon’s AES instruction set is used. Most modern x86 CPUs come with this extension these days.
While this data clearly suggests that AES is the fastest OpenSSH cipher (if there is hardware support for it, as in this case), copying large amounts of data with
scp is not a particularly interesting use case. Sending big streams of data through a pipe into ssh, as you do when you send and receive ZFS snapshots over ssh, is a much more common application. For benchmarking purposes, sending actual ZFS snapshots is not ideal, since ZFS takes some extra time to check the receiving file system (and its snapshots) before starting the sending process. So here’s an altered script that should tell us what the fastest cipher for that particular use case is:
for cipher in aes128-ctr aes192-ctr aes256-ctr arcfour256 arcfour128 \
              aes128-cbc 3des-cbc blowfish-cbc cast128-cbc aes192-cbc \
              aes256-cbc arcfour
do
    echo "$cipher"
    for try in 1 2
    do
        cat /tank/fs/testfile.img | pv | ssh -c "$cipher" root@localhost "cat - > /dev/null"
    done
done
The only difference is in the innermost command: instead of using
scp, the file is now piped directly into ssh and discarded on the receiving side. Again, the 5GiB test file lives on a RAM backed file system and the transfer is done to localhost.
Spawning a tomcat server instance from within Netbeans is really handy for rapid Java Servlet or JavaServer Pages application development. Since log levels are usually quite verbose during development, logs tend to pile up. So you might want to clean out the log directory from time to time. Or maybe you just want to go through one of those logs one more time with a proper editor.
Here’s Netbeans’ default tomcat log file storage location:
So for a current Netbeans with a recent tomcat version this could be something like
Especially for performance benchmarks it can be quite handy to have a zfs pool that’s not limited by the speed of the underlying hard drives or other block devices (like iSCSI or fibre channel). The Linux kernel has a nice block device driver that lets you create virtual block devices that are RAM backed. To list the available options, use
# modinfo brd
[...]
parm:           rd_nr:Maximum number of brd devices (int)
parm:           rd_size:Size of each RAM disk in kbytes. (int)
parm:           max_part:Num Minors to reserve between devices (int)
To create three virtual block devices with a size of 2GiB each for example, load the
brd module with the following options
# modprobe brd rd_nr=3 rd_size=2097152
which will create three devices named
# ls -lah /dev/ram*
brw-rw---- 1 root disk 1, 0 Apr 30 00:34 /dev/ram0
brw-rw---- 1 root disk 1, 1 Apr 30 00:34 /dev/ram1
brw-rw---- 1 root disk 1, 2 Apr 30 00:34 /dev/ram2
Note that the default value for the
rd_nr parameter is 16, which would result in 16
/dev/ramN devices being created. However, the memory for these devices is only actually allocated once the virtual block devices are used.
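Once you are done with the RAM disks, unloading the module frees the memory again. Note that this discards everything on the devices, so destroy any pool living on them first:

```shell
# Removing the brd module deletes all /dev/ramN devices
# and releases the RAM backing them (requires root).
modprobe -r brd
```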
Creating a zfs pool on these RAM backed block devices works just as with any other block device:
# zpool create tank ram0 ram1 ram2
# zpool status
  pool: tank
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	tank        ONLINE       0     0     0
	  ram0      ONLINE       0     0     0
	  ram1      ONLINE       0     0     0
	  ram2      ONLINE       0     0     0

errors: No known data errors
# zpool list
NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
tank  5,95G   224K  5,95G         -     0%     0%  1.00x  ONLINE  -
Reading from and writing to a filesystem on this RAM backed pool should be quite fast
# zfs create tank/fs
# dd if=/dev/zero of=/tank/fs/testfile.img bs=1M count=5k
5120+0 records in
5120+0 records out
5368709120 bytes (5,4 GB) copied, 2,51519 s, 2,1 GB/s
Since the actual performance (throughput as well as IOPS) depends heavily on the actual hardware, your mileage may vary here, of course. Please also keep in mind that writing zeros to a file with
dd is a quick and easy way to get a first ballpark number, but it is not a proper performance benchmark. You might want to have a look at bonnie++ et al. for that.
To temporarily change a console’s keyboard mapping there’s loadkeys, a little user space program that allows you to alter the kernel’s keyboard mapping.
To load the very handy US international keyboard layout, use
# loadkeys us-intl
To list the currently used keyboard layout or all available keyboard layouts, localectl can be used
# localectl status
   System Locale: LANG=en_US.UTF-8
       VC Keymap: us-altgr-intl
      X11 Layout: us
# localectl list-keymaps
[...]
If localectl is not available, keyboard mapping files are usually found at
/lib/kbd/keymaps/xkb/ (e.g. Fedora) or
/usr/share/kbd/keymaps/ (e.g. Arch Linux).
To permanently change the default keyboard layout system-wide, alter the console keymap configuration file.
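On systemd-based distributions the console keymap is typically configured in /etc/vconsole.conf; a minimal sketch, assuming the us-altgr-intl keymap from above:

```
KEYMAP=us-altgr-intl
```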
With Java programs it’s quite common to combine several classes into one JAR archive. Java libraries are typically distributed this way as well.
On Linux platforms, people are quite used to using command line programs, but sometimes it’s handy to distribute a Java program as an executable file that can be run by a simple double-click instead of opening a terminal and typing
java -jar FancyProgram.jar. Of course, one could always configure the desktop environment to associate JAR files with the corresponding executable from the Java Runtime Environment, but adding the JAR archive as a payload to a common shell script is much more universal.
Here’s a small stub of code that will launch the Java interpreter (i.e. the binary called
java) with itself as the JAR file to run.
#!/bin/sh
MYSELF=`which "$0" 2>/dev/null`
[ $? -gt 0 -a -f "$0" ] && MYSELF="./$0"
java=java
if test -n "$JAVA_HOME"; then
    java="$JAVA_HOME/bin/java"
fi
exec "$java" $java_args -jar $MYSELF "$@"
exit 1
To add the original JAR archive as payload and make the resulting file executable, run
cat stub.sh FancyProgram.jar > FancyProgram && chmod +x FancyProgram
You can now execute the resulting file directly, e.g. with ./FancyProgram.
Binary payloads in shell scripts also allow you to distribute entire software packages that could easily consist of hundreds of files as a single shell script, as described in a great article from linuxjournal.com. To wrap JAR archives in native Windows executables, have a look at http://launch4j.sourceforge.net.
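The extraction half of that technique can be sketched in a few lines of portable shell. The marker and file names below are made up for the example; a real installer would extract an archive instead of a text payload:

```shell
# Build a self-extracting script: a stub that copies everything below
# the __PAYLOAD__ marker into a separate file, plus the payload itself.
cat > selfextract.sh <<'EOF'
#!/bin/sh
# Find the line right after the marker and dump the rest of $0.
SKIP=$(awk '/^__PAYLOAD__$/ { print NR + 1; exit }' "$0")
tail -n +"$SKIP" "$0" > payload.out
exit 0
__PAYLOAD__
EOF
printf 'hello payload\n' >> selfextract.sh   # append an example payload
chmod +x selfextract.sh
./selfextract.sh
cat payload.out                              # prints: hello payload
```

The `exit 0` before the marker is what keeps the shell from ever trying to interpret the payload bytes as commands.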
Up to version 5.5 a vCenter appliance was usually deployed by importing the corresponding OVF template that could be downloaded from the VMware website. That process changed with version 6.0 since there is no longer an OVF template. Instead, VMware provides an ISO image that contains the necessary data and tools to deploy a vCenter appliance, even directly from the command line.
After downloading the ISO image file from the My VMware portal, mount it, e.g. to
# mount -o loop /var/lib/libvirt/images/VMware-VCSA-all-6.0.0-3343019.iso /mnt/
The vcsa command line deployment tool can be found at
vcsa-cli-installer/lin64/vcsa-deploy. Since the available options and arguments to this tool are tucked away in one of the many vCenter documentation PDFs, here’s the output of
$ vcsa-cli-installer/lin64/vcsa-deploy --help
usage: vcsa-deploy install [-h] [--template-help] [-v] [-t] [--log-dir LOG_DIR]
                           [--verify-only] [--skip-ovftool-verification]
                           [--no-esx-ssl-verify]
                           [--sso-ssl-thumbprint SSL-SHA1-THUMBPRINT]
                           [--accept-eula]
                           template

Deploy vCSA to a remote host.

optional arguments:
  -h, --help            Show this help message and exit.

Other Arguments:
  --template-help       Print out the help for template settings.
  -v, --verbose         Debug information will be displayed in the console.
                        If you set this parameter, you cannot set --terse.
  -t, --terse           Only warning and error information will be displayed
                        in the console. If you set this paramter, you cannot
                        set --verbose.
  --log-dir LOG_DIR     Directory for log and other output files.
  --verify-only         Perform only the basic template verification and OVF
                        Tool parameter verification, but do not deploy the
                        vCenter Server Appliance.
  --skip-ovftool-verification
                        Deploy the vCenter Server Appliance directly through
                        OVF Tool without performing parameter verification.
                        Basic template verification will still be performed.
  --no-esx-ssl-verify   Skip the SSL verification for ESXi connections.
  --sso-ssl-thumbprint SSL-SHA1-THUMBPRINT
                        Validates server certificate against the supplied
                        SHA1 thumbprint.
  --accept-eula         Accept the end-user license agreement. This argument
                        is required to deploy the appliance.

Required Arguments:
  template              Path of a JSON file that describes the vCenter
                        Server Appliance deployment procedure. Use
                        --template-help for a list of template settings.

The exit codes and their meanings are:
  0: Command ran successfully.
  1: Runtime error.
  2: Validation error.
You can find sample json templates for the deployment in
vcsa-cli-installer/templates/install/. The options should be quite self-explanatory.
A comprehensive list of valid parameters of the json file is available as well by invoking
vcsa-cli-installer/lin64/vcsa-deploy --template-help
The deployment.option parameter specifies how much virtual hardware (CPUs, RAM) should be allocated for the vCenter appliance. Here’s a table of the available options (taken from the VMware vSphere 6.0 Documentation Center)
vCenter Server Appliance size
| Option | max. hosts | max. VMs | appliance CPUs | appliance memory |
Note that the hostname parameter in the network section needs to have a forward and reverse DNS entry (see VMware vCenter server 6 deployment guide) to work. An IP address is also fine though.
After editing the json file to reflect your configuration you can deploy the vCenter appliance by running
vcsa-cli-installer/lin64/vcsa-deploy path_to_config_file.json --accept-eula
# vcsa-cli-installer/lin64/vcsa-deploy ~/vcenter.json --accept-eula
Performing basic template verification...
Starting vCenter Server Appliance installer to deploy "vCenter-Server-Appliance"...
This appliance is a vCenter Server instance with an embedded Platform Services Controller.

See /tmp/vcsaCliInstaller-2016-03-08-15-06-xXUT2j/vcsa-cli-installer.log for the installer logs.
Run the installer with "-v" or "--verbose" to log detailed information
Running OVF Tool to deploy the OVF...
Opening vCenter Server Appliance image: /mnt/vcsa/vmware-vcsa
Opening VI target: vi://root@esxihost:443/
Deploying to VI: vi://root@esxihost:443/
Progress: 99%
Transfer Completed
Powering on VM: vCenter-Server-Appliance
Progress: 98%
Power On completed.
Waiting for IP address...
Received IP address: 172.30.0.23
Installing services...
vCSA firstboot: Progress: 5% Setting up storage
vCSA firstboot: Progress: 50% Installing RPMs
vCSA firstboot: Progress: 55% Installed oracle-instantclient11.2-odbc-220.127.116.11.0.x86_64.rpm
vCSA firstboot: Progress: 63% Installed rvc_1.4.0-3196809_x86_64.rpm
vCSA firstboot: Progress: 64% Installed VMware-rhttpproxy-6.0.0-3339084.x86_64.rpm
vCSA firstboot: Progress: 65% Installed vmware-certificate-server-18.104.22.1687-3242066.x86_64.rpm
vCSA firstboot: Progress: 66% Installed vmware-identity-sts-22.214.171.12460-3208448.noarch.rpm
vCSA firstboot: Progress: 67% Installed VMware-cis-license-6.0.0-3242064.x86_64.rpm
vCSA firstboot: Progress: 70% Installed vmware-esx-netdumper-6.0.0-0.0.2981910.i386.rpm
vCSA firstboot: Progress: 73% Installed VMware-Postgres-126.96.36.199-2921310.x86_64.rpm
vCSA firstboot: Progress: 77% Installed VMware-Postgres-plpython-188.8.131.52-2921310.x86_64.rpm
vCSA firstboot: Progress: 79% Installed VMware-Postgres-client-jdbc-184.108.40.206-2921310.noarch.rpm
vCSA firstboot: Progress: 80% Installed VMware-invsvc-6.0.0-3242064.x86_64.rpm
vCSA firstboot: Progress: 81% Installed VMware-vpxd-6.0.0-3339084.x86_64.rpm
vCSA firstboot: Progress: 81% Installed VMware-vpxd-client-6.0.0-3339084.x86_64.rpm
vCSA firstboot: Progress: 83% Installed VMware-vpxd-vctop-6.0.0-3339084.x86_64.rpm
vCSA firstboot: Progress: 84% Installed VMware-cloudvm-vimtop-6.0.0-3339084.x86_64.rpm
vCSA firstboot: Progress: 85% Installed ipxe-1.0.0-1.2882051.vmw.i686.rpm
vCSA firstboot: Progress: 86% Installed vmware-autodeploy-6.0.0-0.0.3253919.noarch.rpm
vCSA firstboot: Progress: 86% Installed VMware-sps-6.0.0-3339084.x86_64.rpm
vCSA firstboot: Progress: 87% Installed VMware-vdcs-6.0.0-3242353.x86_64.rpm
vCSA firstboot: Progress: 89% Installed VMware-vsanmgmt-6.0.0-0.1.3339084.x86_64.rpm
vCSA firstboot: Progress: 90% Installed vmware-vsm-6.0.0-3339084.x86_64.rpm
vCSA firstboot: Progress: 90% Installed vsphere-client-6.0.0-3338001.noarch.rpm
vCSA firstboot: Progress: 91% Installed VMware-perfcharts-6.0.0-3339084.x86_64.rpm
vCSA firstboot: Progress: 95% Configuring the machine
Services installations succeeded.
Configuring services for first time use...
vCSA firstboot: Progress: 3% Starting VMware Authentication Framework...
vCSA firstboot: Progress: 10% Starting VMware Identity Management Service...
vCSA firstboot: Progress: 17% Starting VMware Component Manager...
vCSA firstboot: Progress: 20% Starting VMware License Service...
vCSA firstboot: Progress: 24% Starting VMware Platform Services Controller Client...
vCSA firstboot: Progress: 27% Starting VMware Service Control Agent...
vCSA firstboot: Progress: 31% Starting VMware vAPI Endpoint...
vCSA firstboot: Progress: 34% Starting VMware System and Hardware Health Manager...
vCSA firstboot: Progress: 37% Starting VMware Appliance Management Service...
vCSA firstboot: Progress: 44% Starting VMware Common Logging Service...
vCSA firstboot: Progress: 48% Starting VMware Postgres...
vCSA firstboot: Progress: 55% Starting VMware Inventory Service...
vCSA firstboot: Progress: 58% Starting VMware Message Bus Configuration Service...
vCSA firstboot: Progress: 63% Starting VMware vSphere Web Client...
vCSA firstboot: Progress: 64% Starting VMware vSphere Web Client...
vCSA firstboot: Progress: 65% Starting VMware vSphere Web Client...
vCSA firstboot: Progress: 68% Starting VMware ESX Agent Manager...
vCSA firstboot: Progress: 72% Starting VMware vSphere Auto Deploy Waiter...
vCSA firstboot: Progress: 75% Starting VMware vSphere Profile-Driven Storage Service...
vCSA firstboot: Progress: 79% Starting VMware Content Library Service...
vCSA firstboot: Progress: 82% Starting VMware vCenter Workflow Manager...
vCSA firstboot: Progress: 89% Starting VMware vService Manager...
vCSA firstboot: Progress: 93% Starting VMware Performance Charts...
vCSA firstboot: Progress: 96% Starting vsphere-client-postinstall...
First time configuration succeeded.
vCenter Server Appliance installer finished deploying "vCenter-Server-Appliance".
This appliance is a vCenter Server instance with an embedded Platform Services Controller.
    System Name: 172.30.0.97
    Log in as: Administrator@vsphere.local
Finished successfully.
You should now be able to log into the vSphere Web Client with Administrator@vsphere.local as the username and the password you specified in the json file.
You can safely ignore the warning about the browser-OS combination.
It can be very handy sometimes to tunnel your browser’s traffic through a secure channel, for example when you are on an insecure or unknown network like a hotel, cafe or airport etc.
To open up a SOCKS proxy on port 8080, run
ssh -C2qTnN -D 8080 firstname.lastname@example.org
To configure Firefox to use the proxy go to Edit → Preferences → Advanced → Network → Settings and enable ‘Manual proxy configuration’
You can also tunnel Firefox’s DNS queries through the SOCKS proxy by enabling the ‘Remote DNS’ checkbox.
For Chrome, you can use the settings dialog quite similarly to the Firefox example above, but you can also specify the proxy through the command line with the
SOCKS_SERVER environment variable. To spawn a new, temporary chrome session with the SOCKS proxy configured, run
SOCKS_SERVER=localhost:8080 google-chrome --user-data-dir=/tmp/chrome $1
Note that it’s a bit more tricky to tell Chrome not to rely on local DNS queries. For details have a look at the chromium documentation.
Sometimes it is a good idea or even necessary to have a local mirror of OmniOS available, e.g. if you do not want to allow your servers direct access to the outside world. Setting up a local OmniOS repository is rather simple.
1. Create a local package repo
To create an empty repo, run pkgrepo:
pkgrepo create /path/to/repo
2. Grab packages from remote repo
To mirror a remote repository to the newly created local repository, you can use:
pkgrecv -s http://pkg.omniti.com/omnios/r151014/ -d /path/to/repo '*'
You could, of course, also restrict it to individual packages or exclude certain packages.
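For example, passing a package pattern instead of '*' limits the download to matching packages (the package name below is purely illustrative):

```shell
# Mirror only a single package (and its versions) instead of the whole repo.
pkgrecv -s http://pkg.omniti.com/omnios/r151014/ -d /path/to/repo web/wget
```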
3. Update the local repository
Updating the local repository is essentially the same as downloading it. Re-run
pkgrecv and new packages will be fetched. Don’t forget to run
refresh on the repo afterwards to catalog any new packages found in the repository and update search indexes:
# pkgrecv -s http://pkg.omniti.com/omnios/r151014/ -d /path/to/repo '*'
# pkgrepo -s /path/to/repo refresh
4. Add the local repository as a publisher
You need to tell your server to use your local repository instead of the upstream one:
# pkg set-publisher -G '*' -g file:///path/to/repo/ omnios
For a more comprehensive documentation of the available options to
set-publisher have a look at the ‘Configuring Publishers’ page at Oracle.
5. Refresh publisher metadata and install packages
After refreshing the publisher metadata you are ready to install packages from your local repository
# pkg refresh --full
# pkg install <packagename>
6. A note on mirroring ms.omniti.com
Creating a mirror of
ms.omniti.com works the same as any other repository. Be sure, however, to use the
-m all-versions flag when downloading the packages into the local repo:
# pkgrepo create /path/to/ms.omniti.com/
# pkgrecv -s http://pkg.omniti.com/omniti-ms/ -d file:///path/to/ms.omniti.com/ -m all-versions '*'
More options to
pkgrecv can be found at Oracle’s pkgrecv manpage. To enable the local
ms.omniti.com repository on your machine, run
# pkg set-publisher -G '*' -g file:///path/to/ms.omniti.com/ ms.omniti.com
Resources and further reading:
Just a quick memory hook on how to update an OmniOS release…
Update the publisher to point to the new release,
r151014 being the release version to update to in this case:
pkg set-publisher -G '*' -g http://pkg.omniti.com/omnios/r151014/ omnios
To update the client’s list of available packages and publisher metadata, run
pkg refresh --full
The actual update process is invoked with
pkg update -v --be-name=omnios-r151014 entire
If you happen to use zones, the process is a little more sophisticated though.