2021-10-30

How to use HSQLDB for the first time

This blog post is a beginner-level tutorial explaining how to use HSQLDB, an SQL relational database management system (RDBMS) written in Java.

HSQLDB supports both the client-server model (i.e. the server process opens the database files) and the embedded model (i.e. the client opens the database files directly). In both cases we use the term client for the process which initiates the SQL statements.

Each HSQLDB table has one of three storage types: memory, cached or text. Data in a memory table is loaded into memory (from the .script file, parsed as SQL INSERT INTO statements at load time) when the database is opened, and saved (written back to the .script file) when the client disconnects. Data in a cached table isn't loaded into memory in its entirety; parts of it are read from the binary .data file (also called the cache file) at query time. Subsequent changes are kept in memory and are written to the .data file when the client disconnects; thus a huge number of inserts done in a single session uses a huge amount of memory. Data in a text table is stored in a separate CSV file.
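
The storage type is selected in the CREATE TABLE statement. As a sketch (the table names and the CSV filename below are made up), a text table additionally needs its CSV file attached with SET TABLE ... SOURCE:

```sql
-- Hypothetical tables illustrating the three storage types.
CREATE MEMORY TABLE small_lookup (id INT, name VARCHAR(255));
CREATE CACHED TABLE big_facts (id INT, payload VARCHAR(255));
CREATE TEXT TABLE csv_rows (id INT, name VARCHAR(255));
-- Attach the CSV file of the text table; fs=, sets the field separator.
SET TABLE csv_rows SOURCE "csv_rows.csv;fs=,";
```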

HSQLDB stores the database in a few files with different suffixes: .lck, .script, .properties, .data, .log, .backup and .lobs. Uncommitted data is not stored in any of the files. Committed data is first appended to the .log file as SQL INSERT INTO and DELETE FROM statements. When the client disconnects (either by exiting or by invoking the SHUTDOWN; SQL statement), committed data is moved to the .script file (for table type memory, as SQL INSERT INTO statements with deleted rows omitted) or to the .data file (for table type cached, as binary data which is faster to query). The .log and .script files are text files containing SQL statements separated by newlines, with no semicolon at the end. Data definition statements (e.g. CREATE TABLE) are also first stored in the .log file, and when the client disconnects, they are moved to the .script file (normalized as CREATE CACHED TABLE and CREATE MEMORY TABLE, with table and column names uppercased).

Follow these steps to run your first few SQL statements:

  • Install Java (the JRE). The following command (without the leading $) should work after successful installation:
    $ java -version
    openjdk version "11.0.11" 2021-04-20
    ...
    
  • Download HSQLDB. At the time of writing this blog post, the .jar files were downloaded from http://hsqldb.org/download/hsqldb_251_jdk6/. You may use a later version. You need 2 .jar files: hsqldb-*.jar (not hsqldb-*-sources.jar) and sqltool-*.jar. Rename the downloaded files to hsqldb.jar and sqltool.jar. It's important that the filename of the former is exactly hsqldb.jar, because sqltool.jar tries to find it by that name.
  • In your download directory, create a text file named sqltool.rc containing these 2 lines:
    urlid first
    url jdbc:hsqldb:file:/tmp/first.hsql;hsqldb.default_table_type=cached;hsqldb.script_format=3
    

    On Windows, use C:/Windows/Temp (with forward slashes) or something similar instead of /tmp as the directory name.

    There is no need to create the database files; HSQLDB will create them automatically.

  • Open a terminal window, cd to your download directory, and run:
    java -jar sqltool.jar --rcFile=sqltool.rc first
    It displays a long welcome message, and shows you the sql> prompt.
  • Type some SQL statements (without the leading sql> prompt):
    sql> CREATE TABLE names (first VARCHAR(255), last VARCHAR(255));
    sql> INSERT INTO names VALUES ('first1', 'last1');
    sql> INSERT INTO names VALUES ('first2', 'last2');
    sql> COMMIT;
    sql> INSERT INTO names VALUES ('first3', 'last3');
    sql> COMMIT;
    sql> SELECT * FROM names;
    FIRST   LAST
    ------  -----
    first1  last1
    first2  last2
    first3  last3
    
    Fetched 3 rows.
    sql> DELETE FROM names WHERE last='last2';
    sql> CREATE INDEX myindex ON names (last);
    sql> COMMIT;
    sql> SELECT * FROM names;
    FIRST   LAST
    ------  -----
    first1  last1
    first3  last3
    
    Fetched 2 rows.
    sql> SHUTDOWN;
    sql> \q
    
  • At this point, metadata has been saved to the compressed file /tmp/first.hsql.script, and data has been saved to the binary file /tmp/first.hsql.data.
  • See this example .java file on using an HSQLDB database from Java code. The JDBC connection string (starting with jdbc:hsqldb:file: in the sqltool.rc file above) should be specified in the first argument of DriverManager.getConnection.

2020-12-02

How to use Docker on Linux amd64 without installing it

This blog post explains how to download and use Docker on Linux amd64 without installing it, and how to clean up afterwards. It complements the official instructions, and contains some copy-pasteable commands.

You will need to start dockerd as root, so we assume that you can run commands as root with sudo.

Visit https://download.docker.com/linux/static/stable/x86_64/, and choose a Docker version. For example, we will use 17.12.1-ce. Download it by running the following command:

$ (V=17.12.1-ce; mkdir "$HOME/Downloads/docker-$V" &&
  cd "$HOME/Downloads/docker-$V" &&
  wget -qO- "https://download.docker.com/linux/static/stable/x86_64/docker-$V.tgz" |
  tar xzv)

If you are using Ubuntu 14.04, you need to install a few small packages:

$ sudo apt-get install cgroup-lite aufs-tools  # Only on Ubuntu 14.04.

(cgroup-lite fixes the Error starting daemon: Devices cgroup isn't mounted error, and aufs-tools fixes the Couldn't run auplink before unmount warning. Optionally, see this answer on fixing the Module br_netfilter not found warning.)

In a new terminal window, start dockerd, and keep it running:

$ sudo env PATH="$HOME/Downloads/docker-17.12.1-ce/docker:$PATH" dockerd

Run this to fix permission issues connecting to /var/run/docker.sock:

$ sudo chgrp sudo /var/run/docker.sock

In each terminal window where you want to run docker commands, run this first:

$ export PATH="$HOME/Downloads/docker-17.12.1-ce/docker:$PATH"

Run this to test that everything is working (it will take a few seconds for the first time):

$ docker run --rm -it hello-world

Now you can run docker commands as usual.

To stop dockerd, press Ctrl-C in the terminal window it is running in, or run this:

$ sudo fuser -k -TERM /var/run/docker.sock

If there are some containers running, dockerd tries to kill them, and waits for 15 seconds. To avoid that delay, you can manually kill them (either before or after stopping dockerd):

$ (P=; test -e /var/run/docker.sock &&
        sudo chgrp sudo /var/run/docker.sock && P="$(docker ps -aq)";
        test "$P" && docker kill $P)

To delete Docker (including images and configuration), first stop dockerd, then run:

$ sudo rm -rf /etc/docker /var/lib/docker /var/run/docker /var/run/docker.*

2020-04-08

How to cross-compile to various EXE files using Digital Mars C Compiler on Linux

This blog post explains how to use Digital Mars C Compiler 8.57 on Linux with Wine to cross-compile to EXE files of 16-bit DOS, 32-bit DOS and 32-bit Windows (Windows 95 -- Windows 10). The actual program compiled is quite dumb (it doesn't matter), the focus is on installation and command-line flags.

Please note that Digital Mars C Compiler hasn't been ported to Linux, so we need to run it in emulation: we will be running the Win32 version in Wine. (Another option would be running the DOS version in DOSBox.) Digital Mars C Compiler is now open source; you may want to contribute to porting it to Linux here.

Digital Mars C Compiler is very small: version 8.57 is less than 13 MiB, including the C and C++ compilers, linker, DOS extender, #include files and libc for 16-bit DOS, 32-bit DOS and 32-bit Windows. If compressed with 7z (LZMA2), it's about 2.28 MiB. It's also impressive that all of it (except for the DOS extender) was written by a single person. See also discussion 1 and discussion 2 on Hacker News, with replies from the author of the compiler.

All commands below are to be run in a Linux terminal window without the leading $.

Check that Wine is installed:

$ wine --version
wine-4.0.3 (Debian 4.0.3-1)

If you get an error message instead of a version number, then install Wine first using your package manager.

Download and extract the compiler:

$ mkdir digitalmarsc
$ R=http://ftp.digitalmars.com/Digital_Mars_C++/Patch/
$ wget -O digitalmarsc/dm857c.zip "$R"/dm857c.zip
$ (cd digitalmarsc && unzip dm857c.zip && rm -f dm857c.zip)
$ wget -O digitalmarsc/dm850dos.zip "$R"/dm850dos.zip
$ (cd digitalmarsc && unzip -n dm850dos.zip && rm -f dm850dos.zip)
$ wget -O digitalmarsc/dm831x.zip "$R"/dm831x.zip
$ (cd digitalmarsc && unzip -n dm831x.zip && rm -f dm831x.zip)
$ KR=https://github.com/Olde-Skuul/KitchenSink/raw/master/sdks/dos/x32/
$ wget -O digitalmarsc/dm/lib/cx.obj  "$KR"/cx.obj
$ wget -O digitalmarsc/dm/lib/x32.lib "$KR"/x32.lib
$ wget -O digitalmarsc/dm/lib/zlx.lod "$KR"/zlx.lod

Please note that dm857c.zip contains additional files, and it overlaps with dm850dos.zip (e.g. dm/lib/sds.lib).

Create the test program source code:

$ cat >myprog.c <<'END'
#include <stdio.h>  /* Code below works without it. */
const int answer = (int)0x44434241;  /* dd 'ABCD' */
int mul(int a, int b) {
  return a * b + answer;
}
int main(int argc, char **argv) {
  (void)argv;
  return mul(argc, argc);
}
END

There is no need to set up environment variables, because Digital Mars C Compiler can find its library files near its own directory. But for convenience we make the shorthand wine dmc work:

$ export WINEPATH="$(winepath -w digitalmarsc/dm/bin)"

Do this to prevent Wine from displaying GUI windows:

$ unset DISPLAY

Compile to all targets:

$ wine dmc    -ml         -v0 -odcprog.dosl.exe  myprog.c
$ wine dmc    -ms         -v0 -odcprog.doss.exe  myprog.c
$ wine dmc    -mx x32.lib -v0 -odcprog.dos32.exe myprog.c
$ wine dmc    -mn         -v0 -odcprog.win32.exe myprog.c
$ wine dmc -c -ml         -v0 -odcprog.dosl.obj  myprog.c
$ wine dmc -c -ms         -v0 -odcprog.doss.obj  myprog.c
$ wine dmc -c -mx         -v0 -odcprog.dos32.obj myprog.c
$ wine dmc -c -mn         -v0 -odcprog.win32.obj myprog.c
$ wine dmc -c -mf         -v0 -odcprog.os232.obj myprog.c

More information about running Digital Mars C Compiler: https://digitalmars.com/ctg/sc.html . Please note that the dmc and sc commands used to be equivalent, but only dmc works in recent versions.

Check the file type of the created executable (*.exe) files:

$ file dcprog.*.exe
dcprog.dos32.exe: MS-DOS executable, MZ for MS-DOS
dcprog.dosl.exe:  MS-DOS executable
dcprog.doss.exe:  MS-DOS executable
dcprog.win32.exe: PE32 executable (console) Intel 80386, for MS Windows

Check the file type of the created object (relocatable, *.obj) files:

$ file dcprog.*.obj
dcprog.dos32.obj: 8086 relocatable (Microsoft), "myprog.c"
dcprog.dosl.obj:  8086 relocatable (Microsoft), "myprog.c"
dcprog.doss.obj:  8086 relocatable (Microsoft), "myprog.c"
dcprog.os232.obj: 8086 relocatable (Microsoft), "myprog.c"
dcprog.win32.obj: 8086 relocatable (Microsoft), "myprog.c"

More information about the OMF OBJ file format (*.obj above): https://pierrelib.pagesperso-orange.fr/exec_formats/OMF_v1.1.pdf

The *.obj files contain a memory model comment with code 0x9d which depends on the target and operating system: dosl has 0lO, doss has 0sO, dos32 has 3fOpd, dosx has 7xO, os232 has fnO, win32 has 7nO.

Alternatives for cross-compiling C code to some of these EXE targets on Linux:

  • OpenWatcom ((16-bit and 32-bit) * (DOS, Windows and OS/2) targets, see blog post for details)
  • Digital Mars C compiler (on Linux needs Wine, 16-bit DOS, 32-bit DOS, 32-bit Windows targets, plus 32-bit OS/2 object target (no linking), see above for details)
  • mingw-w64 (win32 and win64 targets)
  • gcc-ia16 (doss and dosl targets, i.e. 16-bit DOS EXE with various memory models)
  • djgpp-linux32 (dosx target, needs cwsdpmi.exe or pmodstub.exe (PMODE/DJ) as separate downloads)
  • You may be able to run C compilers released for Windows (e.g. the Digital Mars C compiler) using Wine.

2020-04-07

How to cross-compile to various EXE files using OpenWatcom C compiler on Linux

This blog post explains how to use OpenWatcom 2.0 C compiler on Linux (i386 or amd64, any distribution) to cross-compile to EXE files of 16-bit DOS, 32-bit DOS, 16-bit Windows (Windows 3.1), 32-bit Windows (Windows 95 -- Windows 10), 16-bit OS/2 (1.x) and 32-bit OS/2 (2.x). The actual program compiled is quite dumb (it doesn't matter), the focus is on installation and command-line flags.

All commands below are to be run in a Linux terminal window without the leading $.

Download and extract the compiler:

$ mkdir open-watcom-2 open-watcom-2/tmp
$ R=https://github.com/open-watcom/open-watcom-v2/releases
$ wget -O open-watcom-2/tmp/open-watcom-2.zip \
    "$R"/download/Current-build/open-watcom-2_0-c-linux-x86
$ (cd open-watcom-2/tmp && unzip open-watcom-2.zip)
$ (cd open-watcom-2/tmp && mv binl h lib286 lib386 ../)
$ (cd open-watcom-2/tmp && cp binw/dos32a.exe ../binl/)
$ (cd open-watcom-2/tmp && cp binw/dos4gw.exe ../binl/)
$ (cd open-watcom-2/binl && chmod +x owcc wcc wcc386 wlink)
$ rm -rf open-watcom-2/tmp

Create the test program source code:

$ cat >myprog.c <<'END'
#include <stdio.h>  /* Code below works without it. */
const int answer = (int)0x44434241;  /* dd 'ABCD' */
int mul(int a, int b) {
  return a * b + answer;
}
int main(int argc, char **argv) {
  (void)argv;
  return mul(argc, argc);
}
END

Set up compilation environment:

$ export WATCOM="$PWD/open-watcom-2"
$ export PATH="$WATCOM/binl:$PATH" INCLUDE="$WATCOM/h"

Compile to all targets:

$ owcc    -bdos -mcmodel=l -o owprog.dosl.exe  myprog.c
$ owcc    -bdos -mcmodel=s -o owprog.doss.exe  myprog.c
$ owcc    -bdos32a         -o owprog.dos32.exe myprog.c
$ owcc    -bdos4g          -o owprog.dosx.exe  myprog.c
$ owcc    -bos2            -o owprog.os216.exe myprog.c
$ owcc    -bos2v2          -o owprog.os232.exe myprog.c
$ owcc    -bwindows        -o owprog.win16.exe myprog.c
$ owcc    -bnt             -o owprog.win32.exe myprog.c
$ owcc -c -bdos -mcmodel=l -o owprog.dosl.obj  myprog.c
$ owcc -c -bdos -mcmodel=s -o owprog.doss.obj  myprog.c
$ owcc -c -bdos32a         -o owprog.dos32.obj myprog.c
$ owcc -c -bdos4g          -o owprog.dosx.obj  myprog.c
$ owcc -c -bos2            -o owprog.os216.obj myprog.c
$ owcc -c -bos2v2          -o owprog.os232.obj myprog.c
$ owcc -c -bwindows        -o owprog.win16.obj myprog.c
$ owcc -c -bnt             -o owprog.win32.obj myprog.c

More information about OpenWatcom and running it on Linux: https://wiki.archlinux.org/index.php/Open_Watcom

Check the file type of the created executable (*.exe) files:

$ file owprog.*.exe
owprog.dos32.exe: MS-DOS executable, LE executable for MS-DOS, DOS/32A DOS extender (embedded)
owprog.dosl.exe:  MS-DOS executable, MZ for MS-DOS
owprog.doss.exe:  MS-DOS executable, MZ for MS-DOS
owprog.dosx.exe:  MS-DOS executable, LE executable
owprog.os216.exe: MS-DOS executable, NE for OS/2 1.x (EXE)
owprog.os232.exe: MS-DOS executable, LX for OS/2 (console) i80386
owprog.win16.exe: MS-DOS executable, NE for MS Windows 3.x (EXE)
owprog.win32.exe: PE32 executable (console) Intel 80386, for MS Windows

FYI the created owprog.dosx.exe file also needs the dos4gw.exe DOS extender to run. You can copy dos4gw.exe from the binl directory.

Check the file type of the created object (relocatable, *.obj) files:

$ file owprog.*.obj
owprog.dos32.obj: 8086 relocatable (Microsoft), ".../myprog.c"
owprog.dosl.obj:  8086 relocatable (Microsoft), ".../myprog.c"
owprog.doss.obj:  8086 relocatable (Microsoft), ".../myprog.c"
owprog.dosx.obj:  8086 relocatable (Microsoft), ".../myprog.c"
owprog.os216.obj: 8086 relocatable (Microsoft), ".../myprog.c"
owprog.os232.obj: 8086 relocatable (Microsoft), ".../myprog.c"
owprog.win16.obj: 8086 relocatable (Microsoft), ".../myprog.c"
owprog.win32.obj: 8086 relocatable (Microsoft), ".../myprog.c"

More information about the OMF OBJ file format (*.obj above): https://pierrelib.pagesperso-orange.fr/exec_formats/OMF_v1.1.pdf

The *.obj files contain an Intel-specific comment with code 0x9b which depends on the target and operating system: dosl has 0lOed, doss has 0sOed, and dos32, dosx, os232 and win32 all have 3fOpd (these four share an identical .obj file); os216 has 0sOed (its .obj file is identical to the doss one), and win16 also has 0sOed (its .obj file differs from the doss one in only 2 bytes).

Alternatives for cross-compiling C code to some of these EXE targets on Linux:

  • OpenWatcom ((16-bit and 32-bit) * (DOS, Windows and OS/2) targets, see above for details)
  • Digital Mars C compiler (16-bit DOS, 32-bit DOS, 32-bit Windows targets, plus 32-bit OS/2 object target (no linking), see blog post for details)
  • mingw-w64 (win32 and win64 targets)
  • gcc-ia16 (doss and dosl targets, i.e. 16-bit DOS EXE with various memory models)
  • djgpp-linux32 (dosx target, needs cwsdpmi.exe or pmodstub.exe (PMODE/DJ) as separate downloads)
  • You may be able to run C compilers released for Windows (e.g. the Digital Mars C compiler) using Wine.

2019-02-10

Speed of in-memory algorithms in scripting languages

This blog post gives some examples of how much slower in-memory algorithms are in scripting languages than in C.

Before writing this blog post I had the general impression that the speed ratio between code in a scripting language and code in C for the same CPU-bound algorithm is between 5 and 20. I was very much surprised that for LZMA2 decompression I experienced a much larger ratio between Perl and C: 285.

Then I looked at the C speeds and Perl speeds on the Debian Computer Language Benchmarks Game, and I've found these ratios (in decreasing order) between Perl and C: 413, 79.7, 66.3, 62.2, 49.2, 20.8, 12.1, 10.2, 5.87, 1.91. So it turns out that there is a huge fluctuation in the speed ratio, depending on the algorithm.

Takeaways:

  • One doesn't need to use 64-bit registers or vector (SIMD) instructions (e.g. AVX, SSE, MMX) or other special instructions in C code to get a huge speed ratio: for LZMA2 decompression, there can be a huge speed difference even if all variables are 32-bit unsigned integers.
  • One doesn't need to use 64-bit code in C to get a huge speed ratio: for LZMA2 decompression, the benchmarked C code was running as 32-bit (i386, more specifically: i686) code.
  • One doesn't have to use C compiler optimization flags for fast execution (e.g. -O3 or -O2) to get a huge speed ratio: for LZMA2 decompression, the size-optimized output of gcc -Os was already that fast.
  • Cache usage (e.g. L1 cache, L2 cache, L3 cache) can have a huge effect on the speed of C code. The muxzcat executable (https://github.com/pts/muxzcat/releases/download/v1/muxzcat) is 7376 bytes in total, thus the code fits into the fastest (L1) cache of modern Intel processors (L1 cache size is at least 8 KiB, typically at least 32 KiB). The data itself doesn't fit into the cache though.
  • I/O buffering and the associated memory copies can also affect execution speed. The typical size of read(2) calls is >60 KiB, and the typical size of write(2) calls is even larger (2--3 times larger) for LZMA2 decompression; this is fast enough in both the C and the Perl code.
  • Memory allocation can also affect execution speed. The C code for LZMA2 decompression doesn't do any memory allocation. The algorithm of the Perl code doesn't do any either (but the Perl interpreter may do some as part of its overhead), except for the occasional exponential doubling of the string capacity. (Preallocating these string buffers didn't make it any faster.)
  • Even older C compilers (e.g. GCC 4.8 from 2014) can generate very optimized low-level i386 machine code.
  • Some scripting languages are faster than others, e.g. Lua in LuaJIT and JavaScript in Node.js are typically faster than the Python, Perl and Ruby interpreters written in C, and PyPy is faster than the Python interpreter written in C.
  • Different integer sizes (e.g. 8-bit, 16-bit, 32-bit, 64-bit) can affect execution speed. Sometimes larger integers are faster (e.g. 32-bit is faster than 16-bit), because they are better aligned in memory, and fewer conversion instructions are necessary.
  • Integer fixups can contribute to the slowness of scripting languages. For example, the algorithm for LZMA2 decompression works with unsigned 32-bit integers, but Perl has only either signed 64-bit integers or signed 32-bit integers, so the inputs of some operators (e.g. >>, <, ==, %, /) need to be bit-masked to get correct results. Out of these, / and % would be the slowest to fix, but since LZMA2 decompression doesn't use those operators, < is the slowest: in total, the 32-bit Perl is 1.1017 times slower running the LZMA2 decompression than the 64-bit Perl, mostly because operator < and its possibly negative inputs need more complicated masking if Perl is doing 32-bit arithmetic.
  • Function calls can be very slow in scripting languages, while the C compiler can inline some of the smaller functions, avoiding most of the overhead. For LZMA2 decompression, manual inlining of the fixup for operator < on 32-bit Perls made the entire program about 1.3 times faster.

Matching balanced parentheses with recursive Perl regular expressions

This blog post explains how to use recursive Perl regular expressions (regexps) to match substrings with balanced parentheses. Recursive regular expressions are also available in Ruby (with a different syntax) and in the regex extension of Python (but not in the built-in re module), but they are explicitly not available in RE2.

Let's suppose the input file in.txt contains lines like:

a = EQ(x + 6, 42);
a = EQ((x + 6) * 2, 42);
if (x + 6 == 42) { ... }
if (EQ(x + 6, 42)) { ... }
if (EQ((x + 6) * 2, 42)) { ... }

Let's suppose we want to get all instances of EQ and if, with their arguments.

A non-recursive regexp can get only the instances without nested parentheses:

$ <in.txt perl -0777 -wne '
while (m@(?>\b(if|while|EQ|NE|LT|LE|GT|GE)\s*\(([^()]*)\))@g)
{ print "$1($2)\n" }'
EQ(x + 6, 42)
if(x + 6 == 42)
EQ(x + 6, 42)

With a recursive regexp we can get all matches:

$ <in.txt perl -0777 -wne '
while (m@(?>\b(if|while|EQ|NE|LT|LE|GT|GE)\s*(\(((?:[^()]+|(?2))*)\)))@g)
{ print "$1$2\n" }'
EQ(x + 6, 42)
EQ((x + 6) * 2, 42)
if(x + 6 == 42)
if(EQ(x + 6, 42))
if(EQ((x + 6) * 2, 42))

Please note that the EQ inside the if was not matched, because with the global flag (m@...@g) Perl doesn't consider overlapping or enclosed matches.

The recursive part of the regexp is the (?2): it's a recursive reuse of paren group 2. For more information about recursive regexps, see recursive subpattern in perlre(1). The (?>...) construct is a performance optimization to prevent backtracking.

It's also possible to match individual (comma-separated) arguments. For example, here is how to match both arguments of EQ separately, recursively:

$ <in.txt perl -0777 -wne '
while (
m@(?>\b(if|while|EQ|NE|LT|LE|GT|GE)\(((?:[^(),]+|(\(((?:[^()]+|(?3))*)\)))*),\s*((?2))\))@g)
{ print "$1($2, $5)\n" }'
EQ(x + 6, 42)
EQ((x + 6) * 2, 42)
EQ(x + 6, 42)
EQ((x + 6) * 2, 42)

The non-recursive version returns 2 arguments without nested parentheses:

$ <in.txt perl -0777 -wne '
while (m@(?>\b(if|while|EQ|NE|LT|LE|GT|GE)\s*\(([^(),]*),\s*([^(),]*))@g)
{ print "$1($2, $3)\n" }'
EQ(x + 6, 42)
EQ(x + 6, 42)

Here is how to match only a single argument (no comma) recursively:

$ <in.txt perl -0777 -wne '
while (
m@(?>\b(if|while|EQ|NE|LT|LE|GT|GE)\s*\(((?:[^(),]+|(\(((?:[^()]+|(?3))*)\)))*)\))@g)
{ print "$1($2)\n" }'
if(x + 6 == 42)
if(EQ(x + 6, 42))
if(EQ((x + 6) * 2, 42))

The non-recursive version returns 1 argument without nested parentheses:

$ <in.txt perl -0777 -wne '
while (m@(?>\b(if|while|EQ|NE|LT|LE|GT|GE)\s*\(([^(),]*)\))@g)
{ print "$1($2)\n" }'
if(x + 6 == 42)

2018-06-04

How to copy files securely between computers running Linux or Unix?

This blog post gives various recommendations on how to copy files securely between computers running Linux or Unix.

All the recommendations below copy the file in an encrypted way, protecting against eavesdropping and protecting partially against man-in-the-middle attacks (i.e. a third party tricking the receiver into accepting forged file contents).

If both computers run either Chrome or Firefox, and it's convenient for you to use these web browsers, visit any of the following sites to copy the file: sharedrop.io, reep.io, takeafile.com, send-anywhere.com, justbeamit.com. These sites use WebRTC (thus the transfer is encrypted) to copy the file directly from the sender to the receiver without uploading it to a server, and they traverse NAT firewalls using STUN and ICE. (Don't use sites based on WebTorrent (such as instant.io or file.pizza), because WebTorrent transfers are not end-to-end encrypted.)

Otherwise, if one of the computers is running the OpenSSH server (sshd), and the other one is able to connect to it over the network, and you know a user's password on the server (or SSH public keys are set up instead of a password), then use scp or rsync. Otherwise, if one of the computers is able to connect to the other over the network, and the client computer (the one which initiates the TCP connection) has the OpenSSH client (ssh) installed, you have root access on the server, and you don't mind installing software on the server temporarily, then follow the instructions in the One-off SCP with Dropbear section below.

The rest of the setups are typically useful if one of the computers is recently installed (so it doesn't contain your SSH private keys yet), or you don't want any of them act as a server, or you don't have root access.

Otherwise, if both computers are connected to the same local network (e.g. same wifi network), and they are able to connect to each other, try ecplcnw (available and documented here: https://github.com/pts/copystrap).

Otherwise, if both computers have web access, and you don't mind uploading securely encrypted files to a shared hosting provider, use ecptrsh (available and documented here: https://github.com/pts/copystrap).

Otherwise, if you have a USB pen drive, SD card, external hard disk or other writable storage medium which you can physically take from one computer to another, use ecplmdr (available and documented here: https://github.com/pts/copystrap).

Otherwise I have no secure and convenient recommendation for you.

Other secure options for file copy

  • Direct connection between the computers using an Ethernet cable or serial cable. This can work, but it is not convenient, because it needs rare hardware, increasingly rare ports on laptops, and extensive, error-prone manual setup.
  • netcat for transfer + GPG for encryption. Some more details here. This is similar to ecplcnw above, but less convenient and less secure, because user-invented passphrases tend to be weak, and strong passphrases are long and cumbersome to type. Also it's a bit inconvenient to the get the IP address in the command-lines right. Also the user has to remember some GPG quirks to get the security right: specifying --force-mdc and checking the return value of gpg -d.
  • USB pen drive + GPG for encryption. This is similar to ecplmdr above, but less convenient and less secure, because user-invented passphrases tend to be weak, and strong passphrases are long and cumbersome to type. Also the user has to remember some GPG quirks to get the security right: specifying --force-mdc and checking the return value of gpg -d.
  • Using a QR code and scanning it with the webcam: qrencode + zbarcam + GPG for encryption. This works for files smaller than about 10 KiB, because the resolution of the webcam in many laptops is not good enough to scan large QR codes. Without GPG this is not secure, in case someone is taking a video recording of the computer screen. Also the user has to remember some GPG quirks to get the security right: specifying --force-mdc and checking the return value of gpg -d.
  • Setting up the secret key in your YubiKey on one computer, copying the public key from it onto the second computer, and connecting via ssh to the second computer. This works if you already have a YubiKey, the first computer is nearby, and it's convenient for you to set up and dump keys on your YubiKey. How to retrieve the SSH public key from the YubiKey: use ssh-add -L | grep cardno:. Because of the many skilled manual steps involved, this solution is less convenient than the recommendations above.
  • Setting up the secret key in your YubiKey on one computer, adding metadata, then using the list command in gpg --card-edit to get the metadata. This can be used to copy a few hundred bytes if both computers are nearby (i.e. you can connect the same YubiKey to both). This is similar to using a USB pen drive to copy files, but perhaps a bit more secure. (It's more secure only if an attacker stealing your YubiKey can't extract the metadata without knowing the passphrase. This has to be checked.)

Requirements

Security requirements:

  • It encrypts the data end-to-end; only the receiver is able to decrypt it.
  • The receiver is able to detect if the data is indeed what the sender has sent (e.g. it was not tampered with and it was not replaced by the data provided by the attacker).

Convenience requirements:

  • It works on the command-line.
  • It works as a regular user (non-root) on both computers.
  • It works without software installation on both computers.
  • It works without creating any file other than the output data file in the receiver. (We can relax this: a few small temporary files are OK, if they get removed automatically in the end.)
  • It works with very little typing (at most 20 characters of key typing in total). Copy-pasting is OK, but not between the sender and the receiver.
  • There is a mode which works on the local network without a public or local service running and without extra hardware (network cables or USB pen drives).
  • There is a mode which works without a local network and without extra hardware; it is allowed to use a public service.
  • There is a mode which works without any network (local network or internet); it is allowed to use a USB pen drive.

One-off SCP with Dropbear

If one of the computers (let's call it the client) has the OpenSSH client (ssh) installed, and is able to connect to the other computer (let's call it the server), you have root access on the server, and the server doesn't have a working OpenSSH server (sshd) installed, and you don't mind installing software on the server temporarily, you can follow these steps to copy files securely.

On the server, install Dropbear. For example, on Debian 9 or later, run this as root (without the leading #):

# apt-get install dropbear-bin

On the server, install the scp command-line tool, part of OpenSSH. For example, on Debian 9 or later:

# apt-get install openssh-client

On the server, generate an SSH host key, and start the server:

# dropbearkey -t rsa -s 4096 -f dbhostkey
# /usr/sbin/dropbear -r dbhostkey -F -E -m -w -j -k -p 64358 -P dbtmp.pid

The last command (dropbear) makes the Dropbear SSH server keep running and serving incoming connections until you press Ctrl-C in the terminal window. This is normal.

When dropbearkey above prints the Fingerprint: md5 value, remember it, because you will have to compare it with the value printed by the client.

On the client, initiate the copy with the following command (without the leading $):

$ SSH_AUTH_SOCK= scp -o Port=64358 -o HostName=... -o User=... \
    -F /dev/null -o UserKnownHostsFile=/dev/null \
    -o HostKeyAlgorithms=ssh-rsa -o FingerprintHash=md5 SOURCE DESTINATION

In the command above:

  • Specify HostName=... as the host name of the server.
  • Specify User=... as the non-root user name to be used on the server. scp will ask that user's password on the client.
  • SOURCE and DESTINATION can be a filename on the client, or, if prefixed by r:, then it's a filename inside the home directory of the user on the server.
  • If scp complains about FingerprintHash, then drop the -o FingerprintHash=md5, and try again.
  • When the client prints RSA key fingerprint is MD5:..., compare the ... value with the value printed by dropbearkey on the server. If they don't match perfectly, stop. If you continue anyway, you may be a victim of a man-in-the-middle attack, and your copy is not secure.
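The MD5 fingerprint that both sides print is simply the MD5 digest of the raw host key blob, rendered as colon-separated hex bytes. As an illustration (not the exact code Dropbear or OpenSSH runs), the formatting step looks like this in Python:

```python
import hashlib

def md5_fingerprint(key_blob):
    # Hash the raw key blob and render the 16 digest bytes as
    # colon-separated lowercase hex, e.g. "90:01:50:...".
    digest = hashlib.md5(key_blob).digest()
    return ':'.join('%02x' % b for b in digest)

# Demo on an arbitrary byte string (a real fingerprint would be
# computed over the binary SSH public key blob):
print(md5_fingerprint(b'abc'))
# -> 90:01:50:98:3c:d2:4f:b0:d6:96:3f:7d:28:e1:7f:72
```

Both sides hash the same key blob, so the two printed values must match byte for byte.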

You may run multiple copies with scp between the client and the server.

As an alternative to scp, you can also use rsync to do the copies (if rsync is installed on both the client and the server). The command to be run on the client looks like this:

$ SSH_AUTH_SOCK= rsync --progress -avz \
    -e 'ssh -o Port=64358 -o HostName=... -o User=... -F /dev/null -o UserKnownHostsFile=/dev/null -o HostKeyAlgorithms=ssh-rsa -o FingerprintHash=md5' \
    SOURCE DESTINATION

Abort Dropbear on the server by pressing Ctrl-C in its terminal window.

Having run the copies, remove unnecessary packages from the server. For example (do it carefully, don't remove anything you need), on Debian 9:

# apt-get purge dropbear-bin libtommath1 libtomcrypt1
# apt-get purge openssh-client

2018-04-28

How to force OpenSSH to log in with a specific password or public key

This blog post explains how to force the OpenSSH client to log in with a specific password or public key. This is useful if some of the SSH client config files (/etc/ssh/ssh_config, /etc/ssh/ssh_known_hosts, /etc/ssh/ssh_known_hosts2, ~/.ssh/config, ~/.ssh/known_hosts) or the ssh-agent are in a broken state, and you want to try whether login works independently of these client-side issues.

Run this command to log in, substituting the "${...}" values:

SSH_AUTH_SOCK= /usr/bin/ssh -F /dev/null \
    -o UserKnownHostsFile=/dev/null -o GlobalKnownHostsFile=/dev/null \
    -o StrictHostKeyChecking=no \
    -p "${PORT}" -i "${KEYFILE}" -- "${USERNAME}"@"${HOST}"

Usage notes:

  • To use the default port (22), drop the -p "${PORT}".
  • To use password login instead of public key login, drop the -i "${KEYFILE}".
  • If you don't know where your public key file is, try -i ~/.ssh/id_rsa
  • To use the same username as your local client username, drop the "${USERNAME}"@.

How it works:

  • SSH_AUTH_SOCK= disables the ssh-agent for this connection.
  • Spelling out /usr/bin/ssh makes sure that shell aliases, shell functions and strange directories in $PATH have no effect on which SSH client is used.
  • -o UserKnownHostsFile=/dev/null -o GlobalKnownHostsFile=/dev/null causes existing host keys in the known_hosts files to be ignored, thus the connection will be established even if old or incorrect host keys are saved there. Please note that this also makes it impossible to detect a man-in-the-middle attack, so attackers may be able to steal your password if you use a password to log in; attackers can also steal the contents of your session (commands and their results).
  • -o StrictHostKeyChecking=no suppresses the prompt to add the host key to the known_hosts files.
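The pieces above can also be assembled programmatically. Here is a sketch in Python (the helper name build_ssh_argv is made up; the ssh options themselves are the ones used above):

```python
def build_ssh_argv(host, user=None, port=None, keyfile=None):
    # Build an ssh command line that ignores all client config files
    # and known_hosts files (SSH_AUTH_SOCK is cleared separately in
    # the environment to disable the ssh-agent).
    argv = ['/usr/bin/ssh', '-F', '/dev/null',
            '-o', 'UserKnownHostsFile=/dev/null',
            '-o', 'GlobalKnownHostsFile=/dev/null',
            '-o', 'StrictHostKeyChecking=no']
    if port is not None:     # Drop -p to use the default port (22).
        argv += ['-p', str(port)]
    if keyfile is not None:  # Drop -i to use password login.
        argv += ['-i', keyfile]
    target = host if user is None else '%s@%s' % (user, host)
    argv += ['--', target]
    return argv

# For example, a password login on the default port:
print(build_ssh_argv('example.org', user='alice'))
```

The list returned could be passed to subprocess.call with env={'SSH_AUTH_SOCK': ''} merged into os.environ.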

2018-04-24

A quest to find a fast enclosure for multiple SATA 3.5" hard drives

This blog post documents the quest I'm undertaking to find a fast enclosure for multiple SATA 3.5" hard drives, supporting both USB 3 and eSATA, and the ability to read from both hard drives at the same time with at least 275 MB/s total speed. So far I haven't found a fast enough enclosure, so the quest is still ongoing. I'll keep updating the blog post with speed benchmark results.

The maximum sequential read speeds my drives support are 112 MB/s and 170 MB/s. (There are much faster drives on the market, e.g. the Seagate IronWolf NAS 10 TB can read 250 MB/s in the first 1 TB of the disk.)

I've decided not to order the IcyBox IB-RD3662U3S, because my online research indicates it would be too slow. It uses the chipset JMicron JMB 352 (produced in 2014), which doesn't support UASP (thus it's slow and it uses too much CPU) and maximum SATA speed is 3 Gbit/s.

I've ordered the StarTech S3520BU33ER instead, which uses the chipset JMicron JMS 562 (also produced in 2014), which supports UASP and maximum SATA speed is 6 Gbit/s. I'll run the benchmarks after it arrives.

I've also found the OWC 0GB Mercury Elite Pro Dual RAID USB 3.1 / eSATA Enclosure Kit, which is potentially even faster. It supports USB 3.1, eSATA, UASP, and claims to be very fast: more than 400 MB/s over both USB and eSATA. It also uses the same chipset: JMicron JMS 562. It's available from amazon.com and from the manufacturer's webshop (with expensive international delivery).

Depending on the computer it can be much faster to connect the 2 hard drives within separate single-drive enclosures, using separate USB 3 ports or an unpowered hub. I'm not pursuing this option right now, because I have other uses for my USB ports, and I want low CPU usage (eSATA uses less CPU than USB 3).

For a home media server, it may be cheaper to buy a NAS, e.g. the QNAP TS-251+ with Ethernet and HDMI ports, DLNA with full HD video transcoding and other media server features, with a maximum transfer speed of 224 MB/s. (Other kinds of QNAP NASes don't seem to be any faster.) However, with a NAS I wouldn't get the flexibility and configurability of a stock Debian operating system running on a stock amd64 CPU with 4 GiB of RAM.

2018-04-21

How to update the BIOS on a Lenovo T400 laptop

This blog post explains how to update the BIOS to version 3.24 (released on 2012-12-16, latest release as of 2018-04-21) on a Lenovo T400 laptop.

The instructions below don't seem to work: I get an Operating system not found error when booting from the pen drive. Unfortunately I don't know how to fix that.

You will need a working and charged battery pack for the BIOS update, so install the battery pack first and start charging it.

If you are running Windows XP, Windows Vista or Windows 7 on the laptop, download the BIOS Update Utility from here (choose 32-bit or the 64-bit version depending on your Windows type, or try both versions if you don't know), and run it, and you are done.

Otherwise, if you are able to burn a CD or DVD (either on the Lenovo T400 laptop or on another computer), and you have a working DVD reader in the Lenovo T400, then download the installer DVD .iso from here, burn it to a DVD, insert the DVD to the Lenovo T400, reboot the Lenovo T400, press the blue ThinkVantage button (near the top left corner of the keyboard), press F12 to select a boot device, select the DVD, boot from it.

Otherwise, if you have a USB pen drive of at least 34 MB in size whose contents can be overwritten, and you have a Linux system running (either on the Lenovo T400 laptop or on another computer), then connect the pen drive and figure out its device name using sudo fdisk -l (typically it will be /dev/sdb or /dev/sdc, but be extra careful, otherwise you will overwrite the contents of some other drive). Run this command to download the image: wget https://download.lenovo.com/ibmdl/pub/pc/pccbbs/mobiles/7uuj49uc.iso; then run this command to copy the bootable BIOS update utility to the pen drive: sudo dd if=7uuj49uc.iso of=/dev/sdB bs=49152 skip=1; sync (replacing /dev/sdB with the device of the pen drive). Insert the pen drive into one of the USB slots of the Lenovo T400, reboot the Lenovo T400, press the blue ThinkVantage button (near the top left corner of the keyboard), press F12 to select a boot device, select USB HDD, and boot from it.

After booting into the BIOS update utility, follow the instructions to update the system software. (Don't reboot or turn off until asked.) The next reboot will take longer, the Lenovo logo will appear and disappear 3 times. After that you are done.

Now if you enter the BIOS setup at boot time (by pressing the blue ThinkVantage button), you will see version 3.24 (7UET94WW) 2012-10-17.

2018-04-09

How to change which characters are selected by double-clicking in xterm

Various terminal emulators on Linux (e.g. xterm, gnome-terminal, rxvt) have word selection: when you double-click a character, it selects the entire word containing the character. This blog post explains how to customize which characters are part of a word in xterm.

The various defaults are for ASCII characters (in addition to digits and the letters a-z and A-Z):

  • gnome-terminal: # % & + , - . / = ? @ \ _ ~
  • rxvt: ! # $ % + - . / : _
  • xterm default: _
  • xterm in Ubuntu: ! # % & + , - . / : = ? @ _ ~

It's possible to customize which characters are part of a word in xterm by specifying the charClass resource. A value like 95:48 means: put character 95 (_) into character class 48, the class of digits and letters, thus making it part of a word. The numbers before the colon can also be character ranges, for example 43-47 means the ASCII characters 43 (+), 44 (,), 45 (-), 46 (.) and 47 (/).

Here is how to trigger various default behaviors from the command-line:

  • gnome-terminal: xterm -xrm '*.VT100.charClass: 35:48,37:48,38:48,43-47:48,61:48,63-64:48,92:48,95:58,126:48'
  • rxvt: xterm -xrm '*.VT100.charClass: 33:48,35-37:48,43:48,45-47:48,58:48,95:58'
  • xterm default: xterm -xrm '*.VT100.charClass: 95:48'
  • xterm in Ubuntu: xterm -xrm '*.VT100.charClass: 33:48,35:48,37-38:48,43-47:48,58:48,61:48,63-64:48,95:48,126:48'
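The range syntax can be generated mechanically from a set of characters: sort the character codes and merge consecutive codes into ranges. Here is a sketch in Python (the helper name char_class is made up):

```python
def char_class(chars, cls=48):
    # Map each character to its ASCII code, merge consecutive codes
    # into ranges, and emit xterm charClass syntax like "43-47:48".
    codes = sorted(ord(c) for c in chars)
    ranges = []
    for code in codes:
        if ranges and code == ranges[-1][1] + 1:
            ranges[-1][1] = code       # Extend the current range.
        else:
            ranges.append([code, code])
    return ','.join(
        '%d:%d' % (a, cls) if a == b else '%d-%d:%d' % (a, b, cls)
        for a, b in ranges)

# The xterm-in-Ubuntu default word characters from the list above:
print(char_class('!#%&+,-./:=?@_~'))
# -> 33:48,35:48,37-38:48,43-47:48,58:48,61:48,63-64:48,95:48,126:48
```

The output matches the xterm-in-Ubuntu charClass value shown above.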

To save the setting permanently, add a line like this to your ~/.Xresources file (create it if it doesn't exist):

! Here is a pattern that is useful for double-clicking on a URL (default xterm in Ubuntu):
XTerm.VT100.charClass: 33:48,35:48,37-38:48,43-47:48,58:48,61:48,63-64:48,95:48,126:48

Make sure above that the line containing charClass doesn't start with !, because that would be a comment.

The change takes effect automatically the next time you log in. To make it take effect earlier (for all xterms you start), run: xrdb -merge ~/.Xresources

2017-12-06

How to restrict an SSH user to file transfers

This blog post explains how a user on a Unix server can be restricted to file transfers only over SSH. The restriction is implemented by specifying a login shell which imposes a whitelist of allowed commands (e.g. rsync, sftp-server, scp, mkdir), and Unix permissions are used to restrict which files can be read and/or written by these commands.

Implementation using a custom login shell

First install Python 2 (as /usr/bin/python), then create a custom login shell script, and save it to e.g. /usr/local/bin/transfer_shell. The contents of /usr/local/bin/transfer_shell should be:

#! /usr/bin/python
# by pts@fazekas.hu at Wed Dec  6 15:46:18 CET 2017

"""Login shell in Python 2 for SSH service restricted to data copying.

Use normal Unix permissions to restrict what files can be accessed.
"""

import os
import stat
import sys

if os.access(__file__, os.W_OK) or os.access(
    os.path.dirname(__file__), os.W_OK):
  sys.stderr.write('error: copy shell not safe\n')
  sys.exit(1)
if os.getenv('SSH_ORIGINAL_COMMAND', ''):
  sys.stderr.write('error: bad command= config\n')
  sys.exit(1)

#cmd = os.getenv('SSH_ORIGINAL_COMMAND', '').split()
#print >>sys.stderr, sys.argv
cs = (len(sys.argv) == 3 and sys.argv[1] == '-c' and sys.argv[2]) or ''
if cs == '/bin/sh .ssh/rc':
  sys.exit(0)
cmd = cs.split()
# cmd0 will be '' for interactive shells, thus it will be disallowed.
cmd0 = (cmd or ('',))[0]
#print >>sys.stderr, sorted(os.environ)
if cmd0 not in ('ls', 'pwd', 'id', 'cat', 'echo', 'cp', 'mv', 'rm',
                'mkdir', 'rmdir',
                'rsync', 'scp', '/usr/lib/openssh/sftp-server'):
  # In case of sftp, we can't write to stderr.
  sys.stderr.write('error: command not allowed: %s\n' % cmd0)
  sys.exit(1)
def is_scp_unsafe(cmd):
  has_tf = False
  for i in xrange(1, len(cmd) - 1):
    arg = cmd[i]
    if arg == '--' or not arg.startswith('-'):
      break
    elif arg in ('-t', '-f'):  # Flags indicating remote operation.
      has_tf = True
    elif arg not in ('-v', '-r', '-p', '-d'):
      return True
  return not has_tf
if ((cmd0 == 'rsync' and (len(cmd) < 2 or cmd[1] != '--server')) or
    cmd0 == 'scp' and is_scp_unsafe(cmd)):
  # This is to disallow arbitrary command execution with rsync -e and
  # scp -S.
  sys.stderr.write('error: command-line not allowed: %s\n' % cs)
  sys.exit(1)
os.environ['PATH'] = '/bin:/usr/bin'
os.environ.pop('DISPLAY', '')  # Disable X11.
os.environ.pop('XDG_SESSION_COOKIE', '')
os.environ.pop('XAUTHORITY', '')
try:
  os.chdir('data')
except OSError:
  sys.stderr.write('error: data dir not found\n')
  sys.exit(1)
try:
  # This is insecure: os.execl('/bin/sh', 'sh', '-c', cmd)
  os.execvp(cmd0, cmd)
except OSError:
  sys.stderr.write('error: command not found: %s\n' % cmd0)
  sys.exit(1)

Run these commands as root (without the leading #) to set the permissions of transfer_shell:

# chown root.root /usr/local/bin/transfer_shell
# chmod 755       /usr/local/bin/transfer_shell
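The trickiest part of the shell above is the scp argument filter. Here is a Python 3 transcription of is_scp_unsafe (assumed equivalent to the Python 2 version above; only xrange was changed to range), which can be exercised standalone to check its behavior:

```python
def is_scp_unsafe(cmd):
    # Reject scp command lines that don't look like a plain remote-side
    # -t (to) or -f (from) invocation, e.g. ones smuggling in -S.
    has_tf = False
    for i in range(1, len(cmd) - 1):
        arg = cmd[i]
        if arg == '--' or not arg.startswith('-'):
            break
        elif arg in ('-t', '-f'):  # Flags indicating remote operation.
            has_tf = True
        elif arg not in ('-v', '-r', '-p', '-d'):
            return True
    return not has_tf

print(is_scp_unsafe(['scp', '-t', 'dir']))            # -> False (plain upload)
print(is_scp_unsafe(['scp', '-S', 'sh', '-t', 'x']))  # -> True  (-S smuggles a command)
print(is_scp_unsafe(['scp', 'file']))                 # -> True  (no -t/-f at all)
```

Remote-side scp invocations always carry -t or -f; anything else, or any flag outside the small whitelist, is treated as unsafe.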

To set up restrictions for a new user

  1. Create the Unix user if not already created.
  2. Set up Unix groups and permissions on the system so the user doesn't have access to more files than he should have.
  3. Optionally, set up SSH public keys in ~/.ssh/authorized_keys for the user. No need to specify command="..." or other restrictions in that line.
  4. To change the login shell of the user, run this command as root (substituting USER with the login name of the user): chsh -s /usr/local/bin/transfer_shell USER
  5. Create a symlink named data in the home directory of the user. It should point to the default directory for file transfers.
  6. It's strongly recommended that you make the home directory and its contents unwritable by the user. Example command (run it as root, substitute USER): chown root.root ~USER ~USER/.ssh ~USER/.ssh/authorized_keys

Alternatives considered

  • Using a restrictive login shell and setting Unix file permissions. (This is implemented above, and also in scponly and rssh.) The disadvantage is that the Unix permissions may be set up incorrectly by accident (i.e. they are too permissive), and then the user has access to too many files. Another disadvantage is that the custom login shell implementation may be vulnerable or hard to audit (example exploits for running arbitrary commands with rsync and scp: https://www.exploit-db.com/exploits/24795/).
  • Using a restrictive command="..." in ~/.ssh/authorized_keys. This is insecure, because OpenSSH sshd still runs ~/.bashrc and ~/.ssh/rc as shell scripts, and a malicious user could upload their own version of these files, or trigger some command execution in /etc/bash.bashrc. Any of these could lead to the user being able to execute arbitrary shell commands, which we don't want for this user.
  • Running a restrictive, custom SSH server implementation on a different port (while OpenSSH sshd is still running on port 22). This comes with its own risk of possible security bugs, and needs to be upgraded regularly. Also it can be complex to understand and set up correctly.
  • See some more alternatives here: https://serverfault.com/questions/83856/allow-scp-but-not-actual-login-using-ssh.

2017-10-06

Comparison of encrypted Git remote (remote repository) implementations

This blog post is a comparison of encrypted Git remote implementations. A Git remote is a combination of storage space on a remote server, remote server software and local software working together. An encrypted Git remote is a Git remote which makes sure that the storage space on the remote server contains the Git objects encrypted. It is useful if the Git repository contains sensitive information (e.g. passwords, bank account details), and the remote server is not trusted to keep such information hidden from unauthorized readers.

See the recent Hacker News dicsussion Keybase launches encrypted Git for the encrypted, hosted cloud Git remote provided by Keybase.

Comparison

  • name of the Git remote software
    • grg: git-remote-gcrypt
    • git-gpg: git-gpg
    • keybase: git-remote-keybase, the encrypted, hosted cloud Git remote provided by Keybase
  • does it support collaboration (users with different keys pull and push)?
    • grg: yes
    • git-gpg: yes
    • keybase: yes
  • does it encrypt the local .git repository directory?
    • grg: no
    • git-gpg: no
    • keybase: no
  • does it encrypt any files in the local working tree?
    • grg: no
    • git-gpg: no
    • keybase: no
  • does it encrypt the remote repository users push to?
    • grg: yes, it encrypts locally before push
    • git-gpg: yes, it encrypts locally before push
    • keybase: yes, it encrypts locally before push
  • by looking at the remote files, can anyone learn the total number of Git objects?
    • grg: no
    • git-gpg: yes
    • keybase: probably yes
  • can root on the remote server learn the list of contributors (users who do git pull and/or git push)?
    • grg: yes, by making sshd log which SSH public key was used
    • git-gpg: yes, by making sshd log which SSH public key was used
    • keybase: yes
  • by looking at the remote files, can anyone learn the list of contributors (users who do git pull and/or git push)?
    • grg: no
    • git-gpg: no
    • keybase: probably yes
  • by looking at the remote files, can anyone learn when data was pushed?
    • grg: yes
    • git-gpg: yes
    • keybase: probably yes
  • does it support hosting of encrypted remotes on your own server?
    • grg: yes
    • git-gpg: yes
    • keybase: no, at least not by default, and not documented
  • supported remote repository types
    • grg: rsync, local directory, sftp, git repo (local or remote)
    • git-gpg: rsync, local directory
    • keybase: custom, data is stored on KBFS (Keybase filesystem, an encrypted network filesystem)
  • required software on the remote server
    • grg: sshd, (rsync or sftp-server or git)
    • git-gpg: sshd, rsync
    • keybase: custom, the KBFS server, there are no official installation instructions
  • required local software
    • grg: git, gpg, ssh, (rsync or sftp), git-remote-gcrypt
    • git-gpg: git, gpg, ssh, rsync, Python (2.6 or 2.7), git-gpg
    • keybase: binaries provided by Keybase: keybase, git-remote-keybase, kbfsfuse (only for remote repository creation)
  • product URL with installation instructions
    • grg: https://git.spwhitton.name/git-remote-gcrypt/tree/README.rst
    • git-gpg: https://github.com/glassroom/git-gpg
    • keybase: https://keybase.io/blog/encrypted-git-for-everyone
  • source code URL
    • grg: https://git.spwhitton.name/git-remote-gcrypt/tree/git-remote-gcrypt
    • git-gpg: https://github.com/glassroom/git-gpg/blob/master/git-gpg
    • keybase: https://github.com/keybase/kbfs/blob/master/kbfsgit/git-remote-keybase/main.go
  • implementation language
    • grg: Unix shell (e.g. Bash), single file
    • git-gpg: Python 2.6 and 2.7, single file
    • keybase: Go
  • source code size, number of bytes, including comments
    • grg: 21 448 bytes
    • git-gpg: 19 702 bytes
    • keybase: 5 617 305 bytes (including client/go/libkb/**/*.go and kbfs/{env,kbfsgit,libfs,libgit,libkbfs}/**/*.go)
  • is the source code easy to understand?
    • grg: yes, but some developers reported it's less easy than git-gpg
    • git-gpg: yes
    • keybase: no, because it's huge; individual pieces are simple
  • encryption tool used
    • grg: gpg (works with old versions, e.g. 1.4.10 from 2008)
    • git-gpg: gpg (works with old versions, e.g. 1.4.10 from 2008)
    • keybase: custom, written in Go
  • is it implemented as a Git remote helper?
    • grg: yes, git push etc. works
    • git-gpg: no, it works as git gpg push instead of git push etc.
    • keybase: yes, git push etc. works
  • how much extra disk space does it use locally, per repository?
    • grg: less than 1000 bytes
    • git-gpg: stores 2 extra copies of the .git repository locally, one of them containing only loose objects (thus mostly uncompressed)
    • keybase: less than 1000 bytes
  • how much disk space does it use remotely, per repository?
    • grg: one encrypted packfile for each push, encryption has a small (constant) overhead, occasionally runs git repack (locally, and uploads the result), and right after repacking it stores only 1 packfile (plus a small metadata file) per repository
    • git-gpg: one encrypted file for each object, encryption has a small (constant) overhead, no packfiles (thus the remote repository will be large and contain a lot of files, because of the lack of diff compression supported by the packfiles)
    • keybase: probably one encrypted file for each object

2017-09-02

How to run Windows XP on Linux using QEMU and KVM

This blog post is a tutorial explaining how to run Windows XP as a guest operating system using QEMU and KVM on a Linux host. It should take less than 16 minutes, including installation.

Requirements: You need a recent Linux system (Ubuntu 14.04 LTS will work) with a GUI, 620 MB of free disk space and 550 MB of free memory. If you don't want to browse the web from Windows XP, then 300 MB of free memory is enough.

Software used:

  • The latest version of Hiren's BootCD (version 15.2) was released on 2012-11-09. It contains a live (no need to install) mini Windows XP system with a web browser (Opera). (Additionally, it contains hundreds of system rescue, data recovery, antivirus, backup, password recovery, hard disk diagnostics and system diagnostics tools. To see many of them with screenshots, look at this article about Hiren's BootCD, or click on the See CD Contents link on the official Hiren's BootCD download page.)
  • QEMU. It's a full system emulator which can emulate multiple architectures, and it can run many operating systems as a guest.
  • KVM. It's a fast virtualization (emulation) of guest operating systems on Linux. It's used by QEMU, and it lets QEMU execute the CPU-intensive operations on guest systems quickly, with only 10% or less overhead. (I/O-intensive operations can be much slower.)

Log in to the GUI, open a terminal window, and run the following command (without the leading $, copy-paste it as a single, big, multiline paste):

$ python -c'import os, struct, sys, zlib
def work(f):  # Extracts the .iso from the .zip on the fly.
 while 1:
  data, i, c, s = f.read(4), 0, 0, 0
  if data[:3] in ("PK\1", "PK\5", "PK\6"): return f.read()
  assert data[:4] == "PK\3\4", repr(data); data = f.read(26)
  _, _, mth, _, _, _, cs, us, fnl, efl = struct.unpack("<HHHHHlLLHH", data)
  fn = f.read(fnl); assert len(fn) == fnl
  ef = f.read(efl); assert len(ef) == efl
  if fn.endswith(".iso"): uf = open("hirens.iso", "wb")
  else: mth = -1
  if mth == 8: zd = zlib.decompressobj(-15)
  while i < cs:
   j = min(65536, cs - i); data = f.read(j); assert len(data) == j; i += j
   if mth == 8: data = zd.decompress(data)
   if mth != -1: uf.write(data)
  if mth == 8: uf.write(zd.flush())
work(os.popen("wget -nv -O- "
    "http://www.hirensbootcd.org/files/Hirens.BootCD.15.2.zip"))'

The command above downloads the Hiren's BootCD image and extracts it to the file hirens.iso. (Alternatively, you could download from your browser and extract the .iso manually. That would use more temporary disk space.)
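The dense one-liner above parses ZIP local file headers by hand. As an illustration of the same header layout, here is a sketch that builds a small synthetic archive in memory (the member name disk.iso is made up) and extracts it the same way:

```python
import io, struct, zipfile, zlib

# Build a small .zip in memory containing a single member, then parse
# its ZIP local file header by hand, as the one-liner above does.
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w', zipfile.ZIP_DEFLATED) as z:
    z.writestr('disk.iso', b'hello iso data' * 10)
data = buf.getvalue()

# Local file header: 4-byte signature "PK\x03\x04", then 26 bytes of
# fixed fields, then the file name and the extra field.
(sig, ver, flags, mth, mtime, mdate, crc, cs, us, fnl,
 efl) = struct.unpack('<IHHHHHIIIHH', data[:30])
assert sig == 0x04034b50       # "PK\x03\x04" read as little-endian.
name = data[30:30 + fnl].decode()
comp = data[30 + fnl + efl:30 + fnl + efl + cs]
# Method 8 is deflate, stored as a raw (headerless) stream, hence -15.
out = zlib.decompress(comp, -15) if mth == 8 else comp
print(name, out == b'hello iso data' * 10)
# -> disk.iso True
```

The one-liner additionally streams the compressed data in 64 KiB chunks instead of slicing it from memory, so it never holds the whole archive at once.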

Install QEMU. If you have a Debian or Ubuntu system, do it so by running the command (without the leading $):

$ sudo apt-get install qemu-system-x86

On other Linux systems, use your package manager to install QEMU with KVM support.

The only step in this tutorial which needs root access (and thus the root password) is the QEMU installation above.

Run the following command in your terminal window (without the leading $, copy-paste it):

$ SDL_VIDEO_X11_DGAMOUSE=0 qemu-system-i386 -m 512 -machine pc-1.0,accel=kvm \
    -cdrom hirens.iso -localtime -net nic -net user -smb "$HOME"

This command will start a virtual machine running Hiren's Boot CD, and it will display it in a window (of size 800x600). The command will not exit until you close the window (and thus abort the virtual machine).

The virtual machine will use 512 MB of memory (as specified by -m 512 above). It's possible for the mini Windows XP to use less memory: e.g. if you specify -m 256 instead, then it will still work, but web browsing (with Opera) won't work, and you will have to click OK on the Your system is low on virtual memory. dialog later.

In a few seconds, the boot menu of Hiren's BootCD is displayed in the QEMU window.

Press the down arrow key and press Enter to choose Mini Windows Xp. Then wait about 1 minute for Windows XP to start.

To use the mouse within the QEMU window, click on the window. To release your mouse (to be used in other windows), press Ctrl and Alt at the same time.

Networking (such as web and file sharing) is not enabled by default. To enable it, click on the Network Setup icon on the QEMU window desktop, and wait about 20 seconds. The IP address of the guest Windows XP is 10.0.2.15, and the IP address of the host Linux system is 10.0.2.2. Because of the user mode networking emulation provided by QEMU, external TCP connections can also be made from Windows XP (e.g. you can browse the web). Please note that ping won't work (because QEMU doesn't emulate it).

To browse the web, click on the Internet icon on the QEMU Windows desktop. It will start the Opera browser. Web browsing will be quite slow, so it's better to try some fast sites such as google.com or whatismyip.com.

To use the command line, click on the Command prompt icon on the QEMU Windows desktop. There is a useful command to type into that window: net use s: \\10.0.2.4\qemu (press Enter after typing it). That will make your Linux home folder available as drive S: in Windows XP, for reading and writing. (You can change which folder to make available by specifying it after -smb when starting QEMU.)

Copy-pasting between Linux and Windows XP clipboards doesn't work.

You can make the QEMU window larger by changing Start menu / Settings / Control Panel / Display / Settings / Screen resolution to 1024 by 768 pixels. The 1024x768 shortcut on the QEMU Windows desktop doesn't work.

Because of efficient CPU virtualization by KVM, an idle Windows XP in a QEMU window doesn't use more than 10% CPU on the host Linux system.

Hiren's BootCD contains hundreds of Windows apps. Only a fraction of the apps are available from the Windows XP start menu. To see all apps, click on the HBCD Menu icon on the QEMU Windows desktop, and then click on the Browser Folder button.

2017-02-28

How to avoid unnecessary copies when appending to a C++ vector

This blog post explains how to avoid unnecessary copies when appending to a C++ std::vector, and recommends the fast_vector_append helper library, which eliminates most copies automatically.

TL;DR If you are using C++11, and your element classes have an efficient move constructor defined, then just use push_back to append, it won't do any unnecessary copies. In addition to that, if you are constructing the to-be-appended element, you can use emplace_back to append, which even avoids the (otherwise fast) call to the move constructor.

Copying is slow and needs a lot of (temporary) memory if the object contains lots of data. An example of such an object is a long std::string: the entire array of characters gets copied to a new array. This hurts performance if the copy is unnecessary, e.g. if only a temporary copy is made. For example:

std::string create_long_string(int);

std::vector<std::string> v;
{
  // Case A.
  std::string s1 = create_long_string(1);
  std::string s2 = create_long_string(2);
  std::string s3 = create_long_string(3);
  // Case B.
  v.push_back(s1);
  std::cout << s1;
  // Case C.
  v.push_back("foo");
  // Case D, from C++11.
  v.emplace_back("foo");
  // Case E.
  v.push_back(create_long_string(4));
  // Case F.
  v.push_back(std::string()); v.back().swap(s2);
  // Case G, from C++11.
  v.push_back(std::move(s3));
}

In Case A, return value optimization prevents the unnecessary copying: the string built in the function body of create_long_string is placed directly into s1.

In Case B, a copy has to be made (there is no way around it), because v is still valid after s1 is destroyed, thus it cannot reuse the data in s1.

Case C could work without a copy, but in C++98 an unnecessary copy is made. First std::string("foo") is called (which makes a copy of the data), and then the copy constructor of std::string is called to create a new string (with a 2nd copy of the data), which gets added to v.

Case D avoids the 2nd (unnecessary) copy, but it works only from C++11. In earlier versions of C++ (such as C++98), std::vector doesn't have the emplace_back method.

In Case E, there is an unnecessary copy in C++98: create_long_string creates an std::string, and it gets copied to a new std::string within v. It would be better if create_long_string could create the std::string to its final location.

Case F shows the workaround in C++98 of adding s2 to an std::vector without a copy. It's a workaround because it's a bit ugly and it still involves some copying. Fortunately this copying is fast: it copies only the empty string. As a side effect, the value of s2 is lost, it will then be the empty string.

Case G shows the C++11 way of adding s3 to an std::vector without a copy. It doesn't work in C++98 (there is no std::move in C++98). The std::move(s3) visibly documents that the old value of s3 is lost.

C++11 (the version of C++ after C++98) introduces rvalue references, move constructors and move semantics to avoid unnecessary copies. This will fix both Case C and Case E. For this to work, new code needs to be added to the element class (in our case std::string) and to the container class (in our case std::vector) as well. Fortunately, the callers (including our code above and the body of create_long_string) can be kept unchanged. The following code has been added to the C++ standard library (STL) in C++11:

class string {
  ...
  // Copy constructor. C++98, C++11.
  string(const string& old) { ... }
  // Move constructor. Not in C++98, added in C++11.
  string(string&& old) { ... } ... }
};

template<typename T, ...>
class vector {
  ...
  // Takes a const reference. C++98, C++11.
  void push_back(const T& t);
  // Takes an rvalue reference. Not in C++98, added in C++11.
  void push_back(T&& t);
};

As soon as both of these are added, v.push_back(...) will call the 2nd method (which takes the rvalue reference) where possible, which will call the move constructor of std::string instead of the copy constructor. This gives us the benefit of no copying, because typically the move constructor is fast: it doesn't copy data. In general, the move constructor creates the new object with the data of the old object, and it can leave the old object in an arbitrary but valid state. For std::string, it just copies the pointer to the data (which is fast, because it doesn't copy the data itself), and sets the pointer in the old std::string to nullptr. Thus Case C and Case E become fast in C++11. Case B is not affected (it still copies), and that's good, because we want to print s1 to cout below, so we want the data to stay there. This happens automatically, because in the call v.push_back(s1), s1 is not an rvalue reference, thus the const-reference push_back will be called, which does a copy. For more details about the magic to select the proper push_back, see this tutorial or this tutorial.

Guidelines to avoid unnecessary copies

Define your (element) classes like this:

  • Define the default constructor (C() { ... }).
  • Define the destructor (~C() { ... }).
  • Define the copy constructor (C(const C& c) { ... }).
  • It's a good practice to define operator=, but not needed here.
  • For C++11 classes, define a move constructor (e.g. C(C&& c) { ... }).
  • For C++11 classes, don't define a member swap method. If you must define it, then also define a method void shrink_to_fit() { ... }. It doesn't matter what the method does, you can just declare it. The fast_vector_append library detects shrink_to_fit, and will use the move constructor instead of the swap method, the former being slightly faster, although neither copies the data.
  • For C++98 classes, don't define a move constructor. In fact, C++98 doesn't support move constructors.
  • For C++98 classes, define a member swap method.

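Following these guidelines, a sketch of an element class might look like this; the class C and its buffer management are illustrative, not from any library:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>
#include <cstring>
#include <utility>

// Hypothetical element class following the guidelines above.
class C {
 public:
  C() : data_(NULL), size_(0) {}                 // Default constructor.
  ~C() { free(data_); }                          // Destructor.
  C(const C& c) : data_(NULL), size_(c.size_) {  // Copy constructor.
    if (size_ != 0) {
      data_ = static_cast<char*>(malloc(size_));
      memcpy(data_, c.data_, size_);
    }
  }
#if __cplusplus >= 201103L
  // C++11: move constructor steals the buffer; no copy of the data.
  C(C&& c) : data_(c.data_), size_(c.size_) { c.data_ = nullptr; c.size_ = 0; }
#else
  // C++98: member swap as the fast, copy-free exchange.
  void swap(C& c) {
    char* d = data_; data_ = c.data_; c.data_ = d;
    size_t s = size_; size_ = c.size_; c.size_ = s;
  }
#endif
 private:
  char* data_;
  size_t size_;
};
```
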
To append a new element to an std::vector without unnecessary copying, as fast as possible, follow this advice from top to bottom:

  • If it's C++11 mode, and the object is being constructed (not returned by a function!), use emplace_back without the element class name.
  • If it's C++11 mode, and the class has a move constructor, use push_back.
  • If it's C++11 mode, and the class has the member swap method, use: { C c(42); v.resize(v.size() + 1); v.back().swap(c); }
  • If the class has the member swap method, use: { C c(42); v.push_back(C()); v.back().swap(c); }
  • Use push_back. (This is the only case with a slow copy.)

Automating the avoidance of unnecessary copies when appending to a vector

It would be awesome if the compiler could guess the programmer's intentions, e.g. pick emplace_back when it is faster than push_back, and avoid the copy even in C++98 code, e.g. by using swap when it's available but the move constructor isn't. This is important because sometimes it's inconvenient to modify old parts of a codebase defining the element class, and such a class often already has swap.

For automation, use fast_vector_append(v, ...) in the fast_vector_append library to append elements to an std::vector. It works in both C++98 and C++11, but it can avoid more copies in C++11. Rewritten to use the library, the example above looks like this:

#include "fast_vector_append.h"
std::string create_long_string(int);

std::vector<std::string> v;
{
  // Case A. No copy.
  std::string s1 = create_long_string(1);
  std::string s2 = create_long_string(2);
  std::string s3 = create_long_string(3);
  // Case B. Copied.
  fast_vector_append(v, s1);
  std::cout << s1;
}
// Case C. Not copied.
fast_vector_append(v, "foo");
// Case D. Not copied.
fast_vector_append(v, "foo");
// Case E. Copied in C++98.
fast_vector_append(v, create_long_string(4));
{ std::string s4 = create_long_string(4);
  // Case E2. Not copied.
  fast_vector_append_move(v, s4);
}
// Case F. Not copied.
fast_vector_append_move(v, s2);
// Case G. Not copied.
fast_vector_append_move(v, s3);
// Case H. Copied in C++98.
fast_vector_append(v, std::string("foo"));

Autodetection of class features with SFINAE

The library fast_vector_append does some interesting SFINAE tricks to autodetect the features of the element class, so that it can use the fastest way of appending that the class supports.

For example, this is how it detects whether to use the member swap method:

// Use swap iff: has swap, doesn't have std::get, doesn't have shrink_to_fit,
// doesn't have emplace, doesn't have remove_suffix. By doing so we match
// all C++11, C++14 and C++17 STL templates except for std::optional and
// std::any. Not matching a few of them is not a problem because then member
// .swap will be used on them, and that's good enough.
//
// Based on HAS_MEM_FUNC in http://stackoverflow.com/a/264088/97248 .  
// Based on decltype(...) in http://stackoverflow.com/a/6324863/97248 .
template<typename T>   
struct __aph_use_swap {
  template <typename U, U> struct type_check;
  // This also checks the return type of swap (void). The checks with
  // decltype below don't check the return type.
  template <typename B> static char (&chk_swap(type_check<void(B::*)(B&), &B::swap>*))[2];
  template <typename  > static char (&chk_swap(...))[1];
  template <typename B> static char (&chk_get(decltype(std::get<0>(*(B*)0), 0)))[1];
  // ^^^ C++11 only: std::pair, std::tuple, std::variant, std::array.
  template <typename  > static char (&chk_get(...))[2];
  template <typename B> static char (&chk_s2f(decltype(((B*)0)->shrink_to_fit(), 0)))[1];
  // ^^^ C++11 only: std::vector, std::deque, std::string, std::basic_string.
  template <typename  > static char (&chk_s2f(...))[2];
  template <typename B> static char (&chk_empl(decltype(((B*)0)->emplace(), 0)))[1];
  // ^^^ C++11 only: std::vector, std::deque, std::set, std::multiset, std::map, std::multimap, std::unordered_multiset, std::unordered_map, std::unordered_multimap, std::stack, std::queue, std::priority_queue.
  template <typename  > static char (&chk_empl(...))[2];
  template <typename B> static char (&chk_rsuf(decltype(((B*)0)->remove_suffix(0), 0)))[1];
  // ^^^ C++17 only: std::string_view, std::basic_string_view.
  template <typename  > static char (&chk_rsuf(...))[2];
  static bool const value = sizeof(chk_swap<T>(0)) == 2 && sizeof(chk_get<T>(0)) == 2 &&
      sizeof(chk_s2f<T>(0)) == 2 && sizeof(chk_empl<T>(0)) == 2 &&
      sizeof(chk_rsuf<T>(0)) == 2;
};

The autodetection is used like this, to select one of the 2 implementations (either with v.back().swap(t) or v.push_back(std::move(t))):

template<typename V, typename T> static inline
typename std::enable_if<std::is_same<typename V::value_type, T>::value &&
    __aph_use_swap<typename V::value_type>::value, void>::type
fast_vector_append(V& v, T&& t) { v.resize(v.size() + 1); v.back().swap(t); }                               

template<typename V, typename T> static inline
typename std::enable_if<std::is_same<typename V::value_type, T>::value &&
    !__aph_use_swap<typename V::value_type>::value, void>::type
fast_vector_append(V& v, T&& t) { v.push_back(std::move(t)); }