Okular uses ridiculous amounts of memory

Especially on large PDF files, Okular tends to occupy insane amounts of memory. That’s because already rendered pages are kept in a cache for faster revisiting, so as you scroll quickly through a large PDF (say, a couple hundred pages), Okular can easily occupy gigabytes of RAM for a PDF file only a few MB in size.

The problem has existed for quite a while and a couple of versions now, and there is even a bug report in the KDE bug tracker. As a quick fix, I would simply suggest lowering Okular’s MemoryLevel. Modern processors usually render regular pages (eBooks, datasheets, application notes etc.) almost instantly, and as long as you don’t mess around with technical drawings or other render-intensive content inside the PDF, there is really no reason to use heap space that aggressively.

You can either use the GUI (Settings → Configure Okular… → Performance → Memory Usage) to set the Memory Usage to “Low”,


Change "Memory Usage" to "Low" to prevent caching

or change the MemoryLevel variable in ~/.kde/share/config/okularpartrc to “Low”. If the variable (or the [Dlg Performance] section) doesn’t exist, simply create it.

[Dlg Performance]
MemoryLevel=Low
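If you prefer the command line, a small shell sketch can make that edit for you. This assumes the KDE 4 config path ~/.kde/share/config/okularpartrc mentioned above; adjust the path for other KDE versions:

```shell
RC="$HOME/.kde/share/config/okularpartrc"
mkdir -p "$(dirname "$RC")"
if grep -q '^MemoryLevel=' "$RC" 2>/dev/null; then
    # the variable already exists: switch it to Low
    sed -i 's/^MemoryLevel=.*/MemoryLevel=Low/' "$RC"
else
    # variable (and possibly the whole section) missing: append both
    printf '[Dlg Performance]\nMemoryLevel=Low\n' >> "$RC"
fi
```

Restart Okular afterwards so it picks up the new setting.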

Error “you have not created a bootloader stage1 target device”

This rather cryptic error may appear during a Fedora 16 installation and simply tries to tell you that you forgot to create a BIOS boot partition.
If you’re doing a kickstart install, a look at Fedora’s Kickstart wiki page may be helpful. A big yellow alert box essentially tells you to add the following line

part biosboot --fstype=biosboot --size=1

to any kickstart file that used to work with Fedora versions <= 15.

Resources: Fedora 16 common bugs
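For context, a hypothetical partitioning section of such a kickstart file might then look like this (all sizes are illustrative, only the biosboot line is the actual fix):

```
zerombr
clearpart --all --initlabel
part biosboot --fstype=biosboot --size=1
part /boot --fstype=ext4 --size=500
part / --fstype=ext4 --size=8192 --grow
```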

Changing rpmbuild working directory

Usually, rpmbuild-related variables are set in ~/.rpmmacros. To change the default working directory, one can simply alter the default setting:

%_topdir      %(echo $HOME)/rpmbuild

This would change rpmbuild’s working directory on a per-user basis.

Sometimes it’s quite convenient to keep the default setting and change the working directory on a per-project basis:

$ rpmbuild --define "_topdir workingdir" -ba project.spec

To use the current directory as working directory, one could invoke rpmbuild as follows:

$ rpmbuild --define "_topdir `pwd`" -ba SPECS/project.spec

Careful: the double quotes are mandatory, as is a proper subdirectory structure in the new working directory (BUILD, SRPMS, RPMS, SPECS and SOURCES).
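For a per-project setup, that expected tree can be created in one go (the directory name here is just an example):

```shell
WORKDIR="/tmp/rpmwork"   # example per-project working directory
mkdir -p "$WORKDIR/BUILD" "$WORKDIR/SRPMS" "$WORKDIR/RPMS" "$WORKDIR/SPECS" "$WORKDIR/SOURCES"
ls "$WORKDIR"
# then, e.g.: rpmbuild --define "_topdir $WORKDIR" -ba "$WORKDIR/SPECS/project.spec"
```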


Recovering deleted files with SleuthKit

SleuthKit is probably one of the most comprehensive collections of tools for forensic filesystem analysis. One of the most basic use-cases is the recovery of files that have been deleted. However, SleuthKit can do much, much more. Have a look at the case studies wiki page for an impression.

Let’s assume there is a FAT volume on our disk (maybe a USB stick or a memory card) and we want to recover all deleted files. The safest way is probably to duplicate the entire volume first and perform an offline analysis. Again, there are quite a few tools for creating (forensic) images, the simplest probably being dd.

To dump all partitions of your disk use

$ dd if=/dev/sdg of=/tmp/disk.img bs=512

Of course, you could also dump just one partition (e.g. /dev/sdg2).
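For flaky media it is common to add conv=noerror,sync, which pads unreadable blocks with zeros instead of aborting the dump. Here is the idea demonstrated on a scratch file rather than a real device:

```shell
# create an 8-sector scratch "device" and dump it with forensic-friendly flags
dd if=/dev/zero of=/tmp/src.img bs=512 count=8 2>/dev/null
dd if=/tmp/src.img of=/tmp/dump.img bs=512 conv=noerror,sync 2>/dev/null
wc -c < /tmp/dump.img
```

On a real device you would use if=/dev/sdg (or a single partition) instead of the scratch file.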

To get the partition table layout, you can then use mmls on the image file:

$ mmls /tmp/disk.img
DOS Partition Table
Offset Sector: 0
Units are in 512-byte sectors

     Slot    Start        End          Length       Description
00:  Meta    0000000000   0000000000   0000000001   Primary Table (#0)
01:  -----   0000000000   0000000134   0000000135   Unallocated
02:  00:00   0000000135   0003858623   0003858489   DOS FAT16 (0x06)
03:  -----   0003858624   0003862527   0000003904   Unallocated

fsstat is used to get information about the filesystem itself. In this case, the target partition starts with an offset of 135, so the imgoffset flag -o is mandatory (if you just dump a single partition, there is of course no offset and -o is not needed).

$ fsstat -o 135 /tmp/disk.img
File System Type: FAT16

OEM Name: MSDOS5.0
Volume ID: 0x34333064
Volume Label (Boot Sector): NO NAME    
Volume Label (Root Directory):
File System Type Label: FAT16   

Sectors before file system: 135
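As an aside, the same start sector lets you loop-mount the partition read-only for a quick look; note that mount expects the offset in bytes, so multiply the mmls start sector by the sector size:

```shell
OFFSET=$((135 * 512))   # start sector from mmls × 512-byte sectors
echo "$OFFSET"          # prints 69120
# sudo mount -o ro,loop,offset=$OFFSET /tmp/disk.img /mnt
```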

You can now do all sorts of things with your image file, e.g. recursively list all files and directories (including deleted ones):

$ fls -o 135 -r /tmp/disk.img

To recover a deleted file by inode number, you can use the command line tool icat:

$ icat -o 135 -r /tmp/disk.img 54 > /tmp/DeletedPicture.jpg

For a quick overview and simple examples on other commands have a look at http://rationallyparanoid.com/articles/sleuth-kit.html.

The SleuthKit wiki points to a great script by Dave Henkewick that runs recursively through the image file and uses fls and icat to retrieve the inode numbers and restore the files:

#!/usr/bin/perl -w
use strict;

# (C) 2004 dave (at) hoax (dot) ca
# ************* THIS SCRIPT HAS NO WARRANTY! **************
# this script works with the output from SleuthKit's fls and icat version: 3.00
# using afflib-3.3.4
# dont worry if you do not have the same versions because it should work unless
# the output from the commands have changed
# if the script does not work, please email me the debug output and
# the output from manually running fls and icat, thanks!

# set the recovery directory
my $fullpath="/tmp/recover/";

# set the absolute path of fls binary
my $FLS="/usr/bin/fls";

# set the fls options
my @FLS_OPT=("-o","135","-f","fat","-pr","-m $fullpath","-s 0");

# set the path of the device to be recovered
my $FLS_IMG="/tmp/disk.img";

# set the inode of the directory to be recovered
my $FLS_inode="2";

# set the path of the icat STDERR log
my $ICAT_LOG="/tmp/icat.log";

# set the absolute path of the icat binary
my $ICAT="/usr/bin/icat";

# set the icat options
my @ICAT_OPT=("-o","135","-f","fat");

# icat reads from the same image as fls
my $ICAT_IMG=$FLS_IMG;


# here we go. hold on tight!

sub list($) {
	#make the recovery dir
	system("mkdir","-p","$fullpath") && die "Cannot mkdir $fullpath while processing: $_";
	#run a recursive FLS on our chosen inode and regex each line
	foreach $_ (`$FLS @FLS_OPT $FLS_IMG $_[0] 2>&1`) {
		#print $_;
		&regex($_);
	}
}

sub regex($) {
	#first, regex for dirs, clean 'em up, and create 'em in recovery dir
	# the following regex will work on output of the format:
	# 0|/directory/file.foo.bar (deleted)|0|r/----------|0|0|0|0|0|0
	# 0|/directory/file.foo.bar|1384462|r/rrw-r--r--|1000|1000|971556|1218136846|1218136846|1225037181|0
	# 0|/directory|1392712|d/drwxr-xr-x|1000|1000|4096|1225309096|1225309096|1226059913|0
	# 0|/directory/file.foo.bar -> /directory2/file2.foo.bar|1384462|l/lrw-r--r--|1000|1000|971556|1218136846|1218136846|1225037181|0
	if (/(\d\|([\S\s]+)\|(\d+)\|\S\/d([\w-]{3})([\w-]{3})([\w-]{3})(\|\d+\|\d+\|\d+\|\d+\|\d+\|\d+\|\d+))/) {
		my $fulldir = $2;
		my $uid = $4; my $gid = $5; my $oid = $6;
		$fulldir =~ s/ (\(deleted(\)|\-realloc\)))$//g;
		$fulldir =~ s/ /_/g;
		$uid =~ s/-//g; $gid =~ s/-//g; $oid =~ s/-//g;
		$uid = lc($uid); $gid = lc($gid); $oid = lc($oid);
		#print "mkdir -p $fulldir\n";
		system("mkdir","-p","$fulldir") && die "Cannot mkdir $fulldir while processing: $_";
		#print "chmod u=$uid,g=$gid,o=$oid $fulldir\n";
		system("chmod","u=$uid,g=$gid,o=$oid","$fulldir") && die "Cannot chmod u=$uid,g=$gid,o=$oid $fulldir while processing: $_";
	#second, regex for files, sockets, fifos then
	#clean and dump them in recovery dir
	} elsif (/(\d\|([\S\s]+)\|(\d+)\|\S\/(-|s|f|r)([\w-]{3})([\w-]{3})([\w-]{3})((\|\d+\|\d+\|\d+\|\d+\|\d+\|\d+\|\d+)|(\|\d+\|\d+\|\d+\|\d+\|\d+\|\d+)))/) {
		my $inode = $3;
		my $fullfile = $2;
		$fullfile =~ s/ (\(deleted(\)|\-realloc\)))$//g;
		$fullfile =~ s/ /_/g;
		#print "$ICAT @ICAT_OPT $ICAT_IMG $inode > $fullfile\n" if ($inode != 0);
		system("$ICAT @ICAT_OPT $ICAT_IMG $inode > \"$fullfile\" 2>> $ICAT_LOG") if ($inode != 0);
		#cannot use die cuz an invalid inode will kill the script
		#&& die "Cannot icat $inode into \"$fullfile\" while processing: $_"
	# third, regex for symlink, clean, and create in recovery dir
	} elsif (/(\d\|([\S\s]+)\s\-\>\s([\S\s]+)\|(\d+)\|\S\/(l)([\w-]{3})([\w-]{3})([\w-]{3})(\|\d+\|\d+\|\d+\|\d+\|\d+\|\d+\|\d+))/) {
		#print "$1\n";
		my $fullsym_dst = $2; my $fullsym_src = $3;
		$fullsym_dst =~ s/ /_/g; $fullsym_src =~ s/ /_/g;
		#print "ln -s $fullsym_src $fullsym_dst\n";
		system("ln","-s","$fullsym_src","$fullsym_dst") && die "Cannot ln $fullsym_src $fullsym_dst while processing: $_";
	} else {
		print "Unknown directory listing. File or directory NOT recovered\nDebug:\n$_[0]\n";
	}
}

&list($FLS_inode); #that's all folks. hope y'all had fun!

Another great tool is the Autopsy Forensic Browser, a graphical frontend to the SleuthKit commands that runs in your browser.

How to find multiple patterns with GNU findutils

Actually, searching for multiple patterns should be a trivial task. Find provides a -o operator (and many others) that lets you combine multiple expressions.

A simple example: you want to find all files in the current directory whose filename extension is either .c or .h:

$ find . \( -name "*.c" -o -name "*.h" \) -print

This is not limited to the -name test but can be combined with any other test (like -perm, -size, -type, etc.)
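A quick way to convince yourself, using a throwaway directory (the paths are arbitrary):

```shell
mkdir -p /tmp/finddemo
touch /tmp/finddemo/main.c /tmp/finddemo/util.h /tmp/finddemo/README
# only the .c and .h files match the combined expression
find /tmp/finddemo -type f \( -name "*.c" -o -name "*.h" \) -print
```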

But careful! You need to quote patterns that contain metacharacters (such as the * in the example above). Single quotes work as well as double quotes. The parentheses surrounding the expression have to be escaped, too. And watch those spaces right before and after the parentheses; they’re essential.

See also find manpage, GNU find documentation

Hack: How to disable recently used items list

Quite a few applications use ~/.local/share/recently-used.xbel to keep track of a user’s most recent files. Unfortunately, not every application offers customization options to disable this list.

One possible solution, granted a quite hacky one, is to clear recently-used.xbel and revoke the user’s permission to edit it again.

First, remove and re-create the file to clear it

$ rm -f ~/.local/share/recently-used.xbel 
$ touch ~/.local/share/recently-used.xbel 

You can then remove the write permission so that the user can’t edit the file any more.

$ chmod -w ~/.local/share/recently-used.xbel 

For KDE, there is quite a similar mechanism. While there is no single file that stores the recently used items, a .desktop file is created in ~/.kde/share/apps/RecentDocuments/ for every item.

If you revoke the user’s write permission on that folder, KDE won’t add any items to the ‘recently used’ list.

$ chmod -w ~/.kde/share/apps/RecentDocuments/

Split flac file into tracks using a cuesheet

shnsplit (part of the “multi-purpose WAVE data processing and reporting utility” shntool package) provides a simple method to split flac files into individual tracks specified in a cuesheet.

$ shnsplit -o flac -f CUESHEET.cue -t %n.%t FLACFILE.flac

With the custom output format module, you can even transcode the tracks directly to another format, e.g. mp3, if your mobile music player doesn’t support flac.

snippets.dzone.com provides an example script for this.

#!/bin/sh
set -e

ENCODE="cust ext=mp3 lame -b 192 - %f"
FORMAT="%n.%t"

FLACFILE="$1"
CUEFILE="$2"

if [ -z "$FLACFILE" ]; then
    echo "usage: flac2mp3 FLAC_FILE [CUE_FILE]"
    exit 1
elif [ -z "$CUEFILE" ]; then
    # no cue file given: assume it sits next to the flac file
    DIRECTORY=$(dirname "$FLACFILE")
    BASENAME=$(basename "$FLACFILE" ".flac")
    CUEFILE="$DIRECTORY/$BASENAME.cue"
fi

shnsplit -O always -o "$ENCODE" -f "$CUEFILE" -t "$FORMAT" "$FLACFILE"

Convert subversion repository to GIT

First, you need to create a file that maps the subversion authors to GIT users (let’s say /tmp/svnusers). The syntax is pretty easy:

kmartin = Kirk Martin <marty@localhost.com>
mattaway = Marshal Attaway <marshal@localhost.com>

To get a list of all your SVN authors, run

$ svn log --xml | grep author | sort -u | perl -pe 's/.*>(.*?)<.*/$1 = /'

in your subversion working copy.

Next, you have to create a temp directory (which will be cloned later to get rid of all the SVN stuff).

$ mkdir /tmp/MyProject_tmp
$ cd /tmp/MyProject_tmp

Now, you can fetch the SVN files from your subversion server:

$ git-svn init svn+ssh://user@SVNHost/MyProject/trunk/ --no-metadata
$ git config svn.authorsfile /tmp/svnusers
$ git-svn fetch

Please note that you may need to adjust the protocol (svn+ssh, http, https, ftp, etc.), user, host, path to the project files, etc.
To get rid of all the SVN remains, simply clone the newly created GIT repo:

$ git clone MyProject_tmp MyProject

Specifying file encoding when writing DOM Documents

Assume we have a fully parsed org.w3c.dom.Document:

Document doc;
//parse doc etc...

Just using LSSerializer‘s writeToString method without specifying any encoding will, by default, result in (rather impractical) UTF-16 encoded XML output:

DOMImplementation impl = doc.getImplementation();
DOMImplementationLS implLS = (DOMImplementationLS) impl.getFeature("LS", "3.0");
LSSerializer lsSerializer = implLS.createLSSerializer();
lsSerializer.getDomConfig().setParameter("format-pretty-print", true);
String result = lsSerializer.writeToString(doc);

will output

<?xml version="1.0" encoding="UTF-16"?>

Unfortunately, specifying an encoding isn’t trivial. Here are two solutions that don’t require any third party libraries:

1. Using org.w3c.dom.ls.LSOutput

DOMImplementation impl = doc.getImplementation();
DOMImplementationLS implLS = (DOMImplementationLS) impl.getFeature("LS", "3.0");
LSSerializer lsSerializer = implLS.createLSSerializer();
lsSerializer.getDomConfig().setParameter("format-pretty-print", true);

LSOutput lsOutput = implLS.createLSOutput();
lsOutput.setEncoding("UTF-8");
Writer stringWriter = new StringWriter();
lsOutput.setCharacterStream(stringWriter);
lsSerializer.write(doc, lsOutput);

String result = stringWriter.toString();

2. Using javax.xml.transform.Transformer

Transformer transformer = TransformerFactory.newInstance().newTransformer();
transformer.setOutputProperty(OutputKeys.ENCODING, "UTF-8");
transformer.setOutputProperty(OutputKeys.INDENT, "yes");
transformer.setOutputProperty("{http://xml.apache.org/xslt}indent-amount", "2");
DOMSource source = new DOMSource(doc);
Writer stringWriter = new StringWriter();
StreamResult streamResult = new StreamResult(stringWriter);
transformer.transform(source, streamResult);        
String result = stringWriter.toString();