Vim mini tutorial – bookmark

ma - create bookmark
1G - jump to beginning of first line
`a - jump to bookmark

You can create multiple named bookmarks inside each file, using the letters a-z.

FreeBSD + SVN + new host IP

Recently, I had a “small” issue with my SVN host. I changed the IP address of my NAS server, and SVN stopped working.

All you have to do is update this entry inside /etc/rc.conf


Make sure you take care of the new host address (in case you specified it in the past)

svnserve_flags="-d --listen-port=3690 --listen-host"

Make sure the host address matches your new IP.
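For example, if the NAS is now reachable at (a made-up address for illustration), the entry becomes:

```shell
# /etc/rc.conf (hypothetical new address)
svnserve_enable="YES"
svnserve_flags="-d --listen-port=3690 --listen-host="
```

After editing, restart svnserve (service svnserve restart) so it binds to the new address.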

bash – getting part of variable (e.g. last 4 characters and prefix)

Recently, I had this small issue: how to split a variable in bash without too much effort. I wanted to split it into two parts: a prefix and a suffix.

There were a few assumptions about the variable's value:

– the length of the variable is >= 5
– the suffix has a fixed length (4 characters)
– prefix lengths can vary

suffix=${variable: -4}
prefix=${variable%????}   # everything except the last 4 characters
echo "variable: $variable prefix: $prefix suffix: $suffix"
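A quick sanity check, using a made-up sample value:

```shell
variable="serial-0042"
suffix=${variable: -4}     # last 4 characters
prefix=${variable%????}    # strip the shortest suffix matching 4 characters
echo "prefix: $prefix suffix: $suffix"
# prints: prefix: serial- suffix: 0042
```

Note that `${variable%????}` only behaves as a prefix extractor when the value is at least 4 characters long, which the assumptions above guarantee.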

macOS High Sierra – make sure your system is safe

If you want to make sure that your macOS High Sierra is clean (when it comes to malicious software) you can use free tool (free as in beer and free as in speech – at the same time) called ClamAV.

You can get it in various ways. You can download its commercial version from the App Store (as a paid release), you can install it using brew, you can download a binary from some place where you have no idea what's really inside, you can install macOS Server (ClamAV comes bundled with it), etc.

However, you can also build it yourself. Directly from sources. It's a pain in the neck, I know, but you can be sure of what you are actually running. And, you will learn that zlib's author is a really brainy person. Go ahead, look him up on Wikipedia.

Anyway. Let’s start. Estimated time to complete (depending on your system configuration) – 1h-2h.

I suggest creating a place where you can put all sources and binaries. I suggest the following approach

mkdir -p $HOME/opt/src
mkdir -p $HOME/opt/usr/local

In each step, we will download the source code of the given tool into

$HOME/opt/src

and then use

./configure --prefix=$HOME/opt/usr/local/$TOOL_NAME

to install them inside $HOME/opt.

1. You need PCRE – Perl Compatible Regular Expressions

cd $HOME/opt/src
curl -O
tar zxf pcre2-10.30.tar.gz
cd pcre2-10.30
./configure --prefix=$HOME/opt/usr/local/pcre2
make install

# You can also run `make check` before installing PCRE, but you may need to apply a patch
# source:
# To apply it, simply put the patch content inside a file called RunGrepTest.fix
# --- 8< --- CUT HERE --- 8< --- CUT HERE --- 8< --- CUT HERE --- 8< --- 
--- RunGrepTest	2017-07-18 18:47:56.000000000 +0200
+++ RunGrepTest.fix	2018-01-07 20:00:40.000000000 +0100
@@ -681,7 +681,7 @@
 # works.

 printf "%c--------------------------- Test N7 ------------------------------\r\n" - >>testtrygrep
-if [ `uname` != "SunOS" ] ; then
+if [ `uname` != "Darwin" ] ; then
   printf "abc\0def" >testNinputgrep
   $valgrind $vjs $pcre2grep -na --newline=nul "^(abc|def)" testNinputgrep | sed 's/\x00/ZERO/' >>testtrygrep
   echo "" >>testtrygrep
# --- 8< --- CUT HERE --- 8< --- CUT HERE --- 8< --- CUT HERE --- 8< ---
# and run patch tool
patch -b RunGrepTest RunGrepTest.fix

2. You need a recent release of clang and llvm

cd $HOME/opt/src
curl -O
cd $HOME/opt/usr/local
tar xvf $HOME/opt/src/clang+llvm-3.6.2-x86_64-apple-darwin.tar.xz

You can’t use a more recent version of Apple’s LLVM :( It means that you may require two separate installations of LLVM. The maximum version of LLVM you can use here is 3.6. Conversely, compiling R with OpenMP support will require version 4.0.1 (take a look here: R 3.4, rJava, macOS and even more mess ;))

3. You need LibreSSL
(special thanks go to the original source of this tip). I was always using OpenSSL, but recently I had more and more issues with it while compiling stuff from sources.

cd $HOME/opt/src
curl -O
tar zxf libressl-2.6.4.tar.gz
cd libressl-2.6.4
export CXXFLAGS="-O3"
export CFLAGS="-O3"
./configure --prefix=$HOME/opt/usr/local/libressl
make check
make install

4. You need zlib

cd $HOME/opt/src
curl -O
tar xf zlib-1.2.11.tar.xz
cd zlib-1.2.11
./configure --prefix=$HOME/opt/usr/local/zlib
make install

5. Build the stuff

export CFLAGS="-O3 -march=nocona"
export CXXFLAGS="-O3 -march=nocona"
export CPPFLAGS="-I$HOME/opt/usr/local/pcre2/include \
  -I$HOME/opt/usr/local/libressl/include \
  -I$HOME/opt/usr/local/zlib/include"
./configure --prefix=$HOME/opt/usr/local/clamav --build=x86_64-apple-darwin`uname -r` \
  --with-pcre=$HOME/opt/usr/local/pcre2 \
  --with-openssl=$HOME/opt/usr/local/libressl \
  --with-zlib=$HOME/opt/usr/local/zlib \
  --disable-zlib-vcheck
make install

6. Make sure to keep your database up to date
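ClamAV ships freshclam for updating the signature database. A minimal sketch under the $HOME/opt layout used in this post – note that you first need to create freshclam.conf from the bundled freshclam.conf.sample, and the exact etc/share paths are assumptions:

```shell
# update the signature database; --config-file and --datadir point
# at the local, non-root installation from the previous steps
$HOME/opt/usr/local/clamav/bin/freshclam \
  --config-file=$HOME/opt/usr/local/clamav/etc/freshclam.conf \
  --datadir=$HOME/opt/usr/local/clamav/share/clamav
```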


7. Now, you can scan your drive for viruses

cd $HOME
$HOME/opt/usr/local/clamav/bin/clamscan --log=$HOME/scan.log -ir $HOME

# if you want to scan your whole drive you need to run the thing as root
# I also suggest excluding /Volumes, unless you want to scan your TimeMachine
# and all attached disks
# -i - report only infected files
# -r - recursive
# --log=$FILE - store output inside $FILE
# --exclude=$DIR - don't scan directory $DIR
cd $HOME
sudo $HOME/opt/usr/local/clamav/bin/clamscan --log=`pwd`/scan.log --exclude=/Volumes --exclude=/tmp -ir /

To compile ClamAV on macOS High Sierra I have used my old scripts, but many thanks go to:

macOS High Sierra and Quick Time Player issues

Recently (after upgrading to macOS High Sierra) I have noticed that playing videos has some flaws. There are small glitches in QuickTime Player (or some libs) that make watching movies really painful.

You can notice small, short sound breaks. It's like somebody is pressing the pause button just for the audio, while the video is still rolling.

So far, I have no idea what is causing this one, but (at least) I have a solution for this situation:

With VLC I can play the very same video material without any issues at all.

Reproducible research

Reproducible research is quite an important topic. Once you design, prepare, and run your experiment, you should make sure it will be possible to reproduce it in the future. Ideally, anyone should be able to perform exactly the same type of experiment.

Around 18 years ago, I started to develop G(enetic) A(lgorithm) B(ack) P(ropagation). At that time, the layout of a Neural Network (layers, biases, and connections between neurons) was usually taken for granted. To solve a problem using a NN, you had to either use some structure described in a scientific paper or design it on your own. I decided to test a slightly different approach: I decided to evolve Neural Networks.

Each Neural Network structure was evolving inside a small, isolated population maintained by a Genetic Algorithm. After some period of time, the best-fitted individuals – ones that could solve the problem most efficiently – had a chance to migrate. This way, the best structure for a given problem was growing slowly without any external intervention. Each evolved Neural Network was supposed to perform two tasks:

– learn to solve the problem, using input patterns from the first set,
– solve the final problems, using input patterns from the second set.

In a sense, the whole process was completely unsupervised. Neural Networks were completely random (at the beginning), and over time the optimal solution emerged.

Recently, I decided to check whether the whole thing still works. To my surprise, getting from the archive (where all sources and input files were stored) to a running state was a really simple task. Of course, it took some time to get familiar with the documentation – yet again, to my surprise, it was quite good. It took some time to compile things (even though it worked almost out of the box), and it took some time to set initial parameters for the application. Anyway, what surprised me most was the low cost of getting from zero to a running application after more than seventeen years! I was able to reuse sample data, I was able to run experiments, and it simply worked as expected! The only difference I noticed was the time needed to evolve the optimal solution – the algorithm performed way faster.

That’s what I call research reproducibility. When somebody asks me:

“-Can I easily reproduce your experiment?”, I can give firm and confident answer,
“-Yes you can!”.

All I did was keep close to standards and well-established practices.

A few well-chosen test cases and a few print statements in the code may be enough.

Some programs are not handled well by debuggers: multi-process or multi-thread programs, operating systems, and distributed systems must often be debugged by lower-level approaches. In such situations, you’re on your own, without much help besides print statements and your own experience and ability to reason about code.

— The Practice of Programming – Brian W. Kernighan and Rob Pike

Make sure to look here if you are using R for your research: Reproducible Research. You can also read a little bit about the role of Software Engineers in research here.

Calling shell process from Groovy script

If you want to run a shell script from Groovy, you can easily achieve that using either the ProcessBuilder class or the execute method.

If you want to access an environment variable (created inside Groovy) there are two ways:

– create a completely new environment – risky, as you can forget some essential variables
– create a new environment based on the parent's – preferred, simply add new variables

In both cases you need to manipulate a Map<String, String> in order to add variables to the environment.

Let’s say we want to run the following script

8< --- CUT HERE --- CUT HERE --- CUT HERE --- CUT HERE ---


#!/bin/bash

echo "Hello from script"

# variable called "variable" must be defined inside environment
echo $variable

8< --- CUT HERE --- CUT HERE --- CUT HERE --- CUT HERE ---

we can either use ProcessBuilder

8< --- CUT HERE --- CUT HERE --- CUT HERE --- CUT HERE --- 

// In this case I use ProcessBuilder class and inheritIO method
def script = "./"
def pb = new ProcessBuilder(script).inheritIO()
def variable = "Variable value"
Map<String, String> env = pb.environment()
env.put( "variable", variable )
Process p = pb.start()

8< --- CUT HERE --- CUT HERE --- CUT HERE --- CUT HERE ---

or, we can use the execute method. It takes two arguments: the environment (a List or String[]) and the working directory (a File).

8< --- CUT HERE --- CUT HERE --- CUT HERE --- CUT HERE ---

def script = "./"
def variable = "Variable value"

// we have to create a modifiable HashMap from the unmodifiable one!
// Note the result of System.getenv():
// "Returns an unmodifiable string map view of the current system environment."
myenv = new HashMap(System.getenv())
myenv.put("variable", variable)

// we have to convert to an array before calling execute
String[] envarray = myenv.collect { k, v -> "$k=$v" }

def std_out = new StringBuilder()
def std_err = new StringBuilder()

proc = script.execute( envarray, null )

proc.consumeProcessOutput(std_out, std_err)

println std_out

8< --- CUT HERE --- CUT HERE --- CUT HERE --- CUT HERE ---

Fortran and GNU Make

Building a binary from Fortran code that is organized in a tree of source directories may be a struggle. Usually, you want to put all objects inside a single directory while, at the same time, keeping sources divided into some logical parts (based on source location and modules). Let's say you have the following source structure.

|-- Makefile
`-- src
    |-- a
    |   |-- a.f90
    |   `-- aa.F90
    |-- b
    |   |-- b.f90
    |   `-- bb.F90
    `-- main.f90

We have two sub-directories (with some logical elements of the code). In addition to that, there is a Makefile that will handle compiling, linking, and archiving sources inside libraries. Code from the two sub-directories, "a" and "b", will be packed into liba.a and libb.a respectively. We want to do that because we want to be able to re-use parts of the code somewhere else. In this case, liba.a will contain two modules that can be used in some other project. As for b, that's not that obvious, as it depends on a. Anyway, it's a good idea to encapsulate parts of the code into logical elements (libraries). This approach enforces proper API design and makes code more portable.

Now, to make things more complicated, source file a.f90 will declare a module called "a_module" and source file aa.F90 will declare "aa_module". These modules will be used inside source files b.f90 and bb.F90.

Let’s take a look at source codes themselves.

8< - CUT HERE --- CUT HERE -- src/a/a.f90 -- CUT HERE --- CUT HERE --

! Source code of file a.f90
module a_module
contains
    subroutine a
      write (*,*) 'Hello a'
    end subroutine a
end module a_module

8< - CUT HERE --- CUT HERE -- src/a/aa.F90 -- CUT HERE --- CUT HERE -

! Source code of file aa.F90
module aa_module
contains
    subroutine aa
      write (*,*) 'Hello aa'
    end subroutine aa
end module aa_module

8< - CUT HERE --- CUT HERE -- src/b/b.f90 -- CUT HERE --- CUT HERE --

! Source code of file b.f90
subroutine b
  use a_module
  write (*,*) 'Hello b'
  call a
end subroutine b

8< - CUT HERE --- CUT HERE -- src/b/bb.F90 -- CUT HERE --- CUT HERE -

! Source code of file bb.F90
subroutine bb
  use aa_module
  write (*,*) 'Hello bb'
  call aa
end subroutine bb

8< - CUT HERE --- CUT HERE -- src/main.f90 -- CUT HERE --- CUT HERE -

! Source code of file main.f90
program main
  write (*,*) 'Hello main'
  call b
  call bb
end program


All these sources will be compiled using Makefile below. After compilation is done, you will get following structure:

|-- Makefile
|-- bin
|   |-- main
|   `-- main_lib
|-- include
|   |-- a_module.mod
|   `-- aa_module.mod
|-- lib
|   |-- liba.a
|   `-- libb.a
|-- obj
|   |-- a.o
|   |-- aa.o
|   |-- b.o
|   |-- bb.o
|   `-- main.o
`-- src
    |-- a
    |   |-- a.f90
    |   `-- aa.F90
    |-- b
    |   |-- b.f90
    |   `-- bb.F90
    `-- main.f90

To build everything, simply call

> make
> ./main
> make clean

And Makefile itself looks like this

8< --- CUT HERE --- CUT HERE -- Makefile -- CUT HERE --- CUT HERE ---

# Some helper variables that will make our life easier
# later on
F90         := gfortran
INCLUDE     := -Iinclude    # I am storing mod files inside "include"
MODULES_OUT := -Jinclude    # directory,  but you may  prefer  "mod"
LIBS 	    := -Llib -la -lb

# Sources are distributed across different directories
# and src itself has multiple sub-directories
SRC_A           := $(wildcard src/a/*.[fF]90)
SRC_B           := $(wildcard src/b/*.[fF]90)
SRC_MAIN        := $(wildcard src/*.[fF]90)

# As we can have arbitrary source locations, I want to
# make a rule for each source location. Our aim here is
# to put all object files inside the "obj" directory and
# we want to flatten the structure
OBJ_A           := $(patsubst src/a/%, obj/%,\
                     $(patsubst %.F90, %.o,\
                       $(patsubst %.f90, %.o, $(SRC_A))))

OBJ_B           := $(patsubst src/b/%, obj/%,\
                     $(patsubst %.F90, %.o,\
                       $(patsubst %.f90, %.o, $(SRC_B))))

OBJ_MAIN        := $(patsubst src/%, obj/%, \
                     $(patsubst %.f90, %.o, $(SRC_MAIN)))

# this is just a dummy target that creates all the
# directories, in case they are missing
dummy_build_folder := $(shell mkdir -p obj bin include lib)

# There are two ways of building main  file.  We can do it
# by linking all objects, or,  we can  link with libraries
# these two targets will build main slightly different way
all: bin/main bin/main_lib

# This target builds main using object files
bin/main: $(OBJ_MAIN) $(OBJ_A) $(OBJ_B)
	@echo $^
	$(F90) -o $@ $^

# This one, uses libraries built from sources a and b
bin/main_lib: $(OBJ_MAIN) lib/liba.a lib/libb.a
	@echo $^
	$(F90) -o $@ $^ $(LIBS)

# Library "a" contains only codes from sub-tree "a"
lib/liba.a: $(OBJ_A)
	@echo $^
	ar -rs $@ $^

# Library "b" contains only codes from sub-tree "b"
lib/libb.a: $(OBJ_B) lib/liba.a
	@echo $^
	ar -rs $@ $^

# We have to provide information how to build objects
# from the sources. Make pattern rules do not understand
# globs like src/** or [fF], so we teach make where to
# look for sources via vpath instead (this also covers
# main.f90, which lays at a different level)
vpath %.f90 src/a src/b src
vpath %.F90 src/a src/b src

obj/%.o: %.f90
	$(F90) $(MODULES_OUT) -o $@ -c $< $(INCLUDE)

obj/%.o: %.F90
	$(F90) $(MODULES_OUT) -o $@ -c $< $(INCLUDE)

# We can do some cleaning afterwards. Clean should leave
# the directory in such a state that only sources and
# Makefile are left there
clean:
	- rm -rf obj
	- rm -rf bin
	- rm -rf include
	- rm -rf lib


jshell and command line arguments

If you are starting your experience with jshell, you will notice that passing command line arguments to your script may be a struggle. Typically, you would expect something like this

> jshell my_script.jsh arg1 'some other arg' yet_another arg

to work in such a way that the arguments are passed to your script. This is not the case here. The reason is that jshell treats all its arguments as a list of files and parses them on its own.

However, you can overcome this issue. And, you can even make it very flexible thanks to Apache ANT. Make sure to get ANT (e.g. 1.10) and put it somewhere. Also, make sure to set ANT_HOME so it points to your ANT installation.

Then, you can do the following inside the script

8< -- CUT HERE --- CUT HERE ---- jshell_script_file ---- CUT HERE -- CUT HERE ---

  // Commandline comes from ant.jar
  import org.apache.tools.ant.types.Commandline;

  class A {
    public void main(String args[]) {
      for(String arg : args) {
        System.out.println(arg);
      }
    }
  }

  new A().main(Commandline.translateCommandline(System.getProperty("args")));


and you can call it like this

# -R passes arguments to the runtime. In this sample we pass -D and it sets the system property "args"
# to the value 'Some arg with spaces' $SHELL $TERM some_other_arg
> jshell --class-path $ANT_HOME/lib/ant.jar \
  -R-Dargs="'Some arg with spaces' $SHELL $TERM some_other_arg" \
  jshell_script_file
Some arg with spaces

R3.4 + OpenMPI 3.0.0 + Rmpi inside macOS – little bit of mess ;)

As usual, there are no easy solutions when it comes to R and mac ;)

First of all, I suggest getting a clean, isolated copy of OpenMPI so you can be sure that your installation has no issues with mixed libs. To do so, simply compile OpenMPI 3.0.0

# Get OpenMPI sources
mkdir -p ~/opt/src
cd ~/opt/src
curl "" \
  -o openmpi-3.0.0.tar.gz
tar zxf openmpi-3.0.0.tar.gz

# Create location for OpenMPI
mkdir -p ~/opt/openmpi/openmpi-3.0.0
cd openmpi-3.0.0
./configure --prefix=$HOME/opt/openmpi/openmpi-3.0.0
make install

It’s time to verify that OpenMPI works as expected. Put content (presented below) into hello.c and run it.

/* Put this text inside hello.c file */
#include <mpi.h>
#include <stdio.h>
int main(int argc, char** argv) {
  int rank;
  int world;
  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &world);
  printf("Hello: rank %d, world: %d\n", rank, world);
  MPI_Finalize();
  return 0;
}

To compile and run it, do the following

export PATH=$HOME/opt/openmpi/openmpi-3.0.0/bin:${PATH}
mpicc -o hello ./hello.c
mpirun -np 2 ./hello

If you get output as below – it’s OK. If not – “Houston, we have a problem”.

Hello: rank 0, world: 2
Hello: rank 1, world: 2

Now, it’s time to install Rmpi. Unfortunately, on macOS, you need to compile it from sources. Download the source package and build it

mkdir -p ~/opt/src/Rmpi
cd ~/opt/src/Rmpi
curl "" -o Rmpi_0.6-6.tar.gz
R CMD INSTALL Rmpi_0.6-6.tar.gz \

As soon as it is ready, you can try whether everything works fine. Try to run it outside R. Just to make sure everything was compiled and works as expected:

mkdir -p ~/tmp/Rmpi_test
cp -r /Library/Frameworks/R.framework/Versions/3.4/Resources/library/Rmpi ~/tmp/Rmpi_test
cd ~/tmp/Rmpi_test/Rmpi
mpirun -np 2 ./ \
  `pwd`/slavedaemon.R \
  tmp needlog \
# If it works, that's fine. Nothing will happen in fact, it will simply run.
# Now, you may be tempted to run more instances (you will probably get error)
mpirun -np 4 ./ \
  `pwd`/slavedaemon.R \
  tmp needlog \
There are not enough slots available in the system to satisfy the 4 slots
that were requested by the application:

Either request fewer slots for your application, or make more slots available
for use.

# You can increase the number of slots by putting
# localhost slots=25
# inside ~/default_hostfile and running mpirun the following way
mpirun --hostfile ~/default_hostfile -np 4 \
  ./ \
  `pwd`/slavedaemon.R \
  tmp \
  needlog \

Now, we can try to run everything inside R

> library(Rmpi)
> mpi.spawn.Rslaves()
There are not enough slots available in the system to satisfy the 4 slots
that were requested by the application:

Either request fewer slots for your application, or make more slots available
for use.
Error in mpi.comm.spawn(slave = system.file("", package = "Rmpi"),  :
  MPI_ERR_SPAWN: could not spawn processes

Oops. The issue here is that Rmpi talks to MPI via its API and doesn’t call mpirun, so we can’t pass the hostfile directly. However, there is hope. The hostfile is one of the ORTE parameters (take a look here and here for more info).

This way, we can put the location of this file inside ~/.openmpi/mca-params.conf. Just do the following:

mkdir -p ~/.openmpi/
echo "orte_default_hostfile=$HOME/default_hostfile" >> ~/.openmpi/mca-params.conf

Now, we can try to run R once more:

> library(Rmpi)
> mpi.spawn.Rslaves()
	4 slaves are spawned successfully. 0 failed.
master (rank 0, comm 1) of size 5 is running on: pi
slave1 (rank 1, comm 1) of size 5 is running on: pi
slave2 (rank 2, comm 1) of size 5 is running on: pi
slave3 (rank 3, comm 1) of size 5 is running on: pi
slave4 (rank 4, comm 1) of size 5 is running on: pi

This time, it worked ;) Have fun with R!

C and controlling debug stuff (something, almost, like Log4j) ;)

In case you are not aware of the macros ;) I am sure you are aware, but, just in case :)

#include <stdio.h>

int f(int a, int b) {
  #ifdef PROFILER
  printf("profiling: inside f\n");
  #endif
  /* do the actual work */
  return a + b;
}

Then, you can control it by doing this

// when you want to profile
#define PROFILER

// when you don't want to profile
//#define PROFILER

In case you are not aware of function pointers ;)

#include <stdio.h>

void (*doprofiling)(void);

void profile() {
    printf("I am profiling\n");
}

void no_profile() {
    printf("I am not profiling\n");
}

void fun() {
    doprofiling();
}

int main() {
    doprofiling = profile;
    fun();

    doprofiling = no_profile;
    fun();

    return 0;
}
Then, you can switch in the code dynamically

> gcc -o profile ./profile.c
> ./profile
I am profiling
I am not profiling

Or, you can use something like this, and you can apply different decorators to different functions

#include <stdio.h>

void doprofiling(void (*proffun)(void)) {
    proffun();
}

void profile() {
    printf("I am profiling\n");
}

void no_profile() {
    printf("I am not profiling\n");
}

void fun_prof() {
  void (*decorator)(void) = profile;
  doprofiling(decorator);
}

void fun_no_prof() {
  void (*decorator)(void) = no_profile;
  doprofiling(decorator);
}

int main() {
    fun_prof();
    fun_no_prof();
    return 0;
}

And, you can still dynamically apply it in the code.

> gcc -o ./profile ./profile.c
> ./profile
I am profiling
I am not profiling

Compiling Slatec on macOS

# SLATEC Common Mathematical Library, Version 4.1, July 1993
# a comprehensive software library containing over
# 1400 general purpose mathematical and statistical routines
# written in Fortran 77.

If you want to install SLATEC, you need to make sure to install gfortran.

Take a look here for brief instructions, somewhere in the middle of the shell code:

Make sure to download the sources and the linux makefile and put all files at the same level. By "at the same level" I mean that all *.f files from slatec_src.tgz, as well as the makefile and the dynamic and static directories from slatec4linux.tgz, are in the same dir.

SLATEC sources:
SLATEC makefile:

Before building the library, make sure to export the FC variable (it is needed by the makefile)

export FC=gfortran

Make sure to change this line inside dynamic/makefile (the recipe of the rule that builds the shared library from $(OBJ))

    $(CC) -shared -o $@ $(OBJ)

to

    $(FC) -shared -o $@ $(OBJ)

Call make

make

Wait a little bit. Take a look inside static and dynamic; the files should be there.

find . -name "libslatec*"

Now, you can try to perform make install (pay attention here as it will overwrite hardcoded locations). Alternatively, you can use

-L${WHERE_YOUR_BUILD_WAS_DONE}/dynamic -lslatec
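As a sketch, a hypothetical program test.f90 could be linked straight against the build tree (keeping the placeholder path from above):

```shell
# link against the freshly built, not-yet-installed SLATEC;
# ${WHERE_YOUR_BUILD_WAS_DONE} is the placeholder used in the text
gfortran -o test test.f90 \
  -L${WHERE_YOUR_BUILD_WAS_DONE}/dynamic -lslatec
```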

SLATEC refers to symbols that can be found inside LAPACK package. If you don’t have it installed, take a look here

mkdir lapack
cd lapack
curl "" -o lapack-3.7.1.tgz
tar zxf lapack-3.7.1.tgz
cd lapack-3.7.1
ln -s

After compilation is done, you can find liblapack.a inside lapack-3.7.1.

Jekyll on macOS

1. Install Ruby

I prefer to install from sources:

> ./configure --prefix=$HOME/opt/ruby
> make
> make install

2. Install RubyGems

I prefer to install from sources:

> export PATH=$HOME/opt/ruby/bin:$PATH
> ruby setup.rb

3. Change repo location

I compiled Ruby without SSL support, so I have to change the ruby gems repo

> gem sources -r
> gem sources -a

(if you want to build with OpenSSL, take a look here: there is a sample related to building OpenSSL on macOS)

4. You can install Jekyll

> gem install jekyll
> jekyll --version
jekyll 3.5.2
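A quick smoke test of the fresh installation (the site name my-site is made up):

```shell
export PATH=$HOME/opt/ruby/bin:$PATH
jekyll new my-site    # scaffold a new site
cd my-site
jekyll build          # generated site lands in _site
```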

That’s it.

Source: Originally, posted here:

git, github and multiple users

Different solutions for working with github repos when you have multiple accounts

Let's say you have two user accounts at github: {user_a} and {user_b}. Sometimes, you may encounter issues while pushing changes back

remote: Permission to {user}/{repo}.git denied to {user_a}.
fatal: unable to access '{user}/{repo}.git/': The requested URL returned error: 403

If you are using https, you can play with .git/config

> git clone https://{user}/{repo}.git
> vi .git/config

_-= Inside VIM =-_
# you may enforce user here
[remote "origin"]
        url = https://{user_a}{user}/{repo}.git

If you are using ssh, you can play with .ssh/config. Define one Host alias per account, each pointing at and using its own key:

Host github-{user_a}
  User git
  IdentityFile ~/.ssh/id_rsa_a

Host github-{user_b}
  User git
  IdentityFile ~/.ssh/id_rsa_b

Then clone using the alias as the host name, e.g. git clone git@github-{user_a}:{user}/{repo}.git

SVN and meld

> svn diff --diff-cmd='meld' file_you_have_changed.c

SVN and multiline log comments

> svn commit -m"We tend to forget
that multiline comments, in SVN, are super easy.
And, they can help to introduce multiple changes
inside one commit. Like:
- improvements,
- bug fixes,
- motivations behind the code." my_super_file.c

Parameter Expansion – POSIX (bash)

|                    |       parameter      |    parameter    |    parameter    |
|                    |   Set and Not Null   |   Set But Null  |      Unset      |
| ${parameter:-word} | substitute parameter | substitute word | substitute word |
| ${parameter-word}  | substitute parameter | substitute null | substitute word |
| ${parameter:=word} | substitute parameter | assign word     | assign word     |
| ${parameter=word}  | substitute parameter | substitute null | assign word     |
| ${parameter:?word} | substitute parameter | error, exit     | error, exit     |
| ${parameter?word}  | substitute parameter | substitute null | error, exit     |
| ${parameter:+word} | substitute word      | substitute null | substitute null |
| ${parameter+word}  | substitute word      | substitute word | substitute null |


R 3.4, rJava, macOS and even more mess ;)

So, you want to have rJava (e.g. rJava_0.9-8.tar.gz) inside your fresh new installation of R 3.4 and you are running macOS. There is bad news and good news ;)

Bad news: it will fail with the default clang that comes with Xcode. You need something better here, something with support for OpenMP. And you can get it the following way

Make sure to take a look here as well:

# make sure to create some place where you want to have it (I, personally, put stuff into ~/opt)
> mkdir ~/opt
> cd ~/opt
> curl \
-o clang+llvm-4.0.1-x86_64-apple-darwin.tar.xz
> tar xf clang+llvm-4.0.1-x86_64-apple-darwin.tar.xz

Now, make sure to install gfortran. I am using version from this location: GFortran.

Once you have it, make sure to install most recent JDK from Oracle. You can find it here: JavaSE.

After you have your Java installed, make sure to do following (double check that everything is OK so far)

> R --version
R version 3.4.1 (2017-06-30) -- "Single Candle"
Copyright (C) 2017 The R Foundation for Statistical Computing
Platform: x86_64-apple-darwin15.6.0 (64-bit)

R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under the terms of the
GNU General Public License versions 2 or 3.
For more information about these matters see
> /usr/libexec/java_home -V
Matching Java Virtual Machines (3):
    1.8.0_144, x86_64:	"Java SE 8"	/Library/Java/JavaVirtualMachines/jdk1.8.0_144.jdk/Contents/Home
    1.8.0_111, x86_64:	"Java SE 8"	/Library/Java/JavaVirtualMachines/jdk1.8.0_111.jdk/Contents/Home
    1.7.0_80, x86_64:	"Java SE 7"	/Library/Java/JavaVirtualMachines/jdk1.7.0_80.jdk/Contents/Home


# Make sure to put following line inside you ~/.profile
# export JAVA_HOME=$(/usr/libexec/java_home -v 1.8.0_144)
# close and re-open Terminal session. Now, you should be able to see:
> echo $JAVA_HOME

Make sure to enable your JDK for JNI

> sudo vi /Library/Java/JavaVirtualMachines/jdk1.8.0_144.jdk/Contents/Info.plist

# make sure to replace
#  <string>CommandLine</string>
# with
#  <string>CommandLine</string>
#  <string>JNI</string>

Now, it’s time to configure R. Make sure to run following command.

!! (note that we change JAVA_HOME to JRE) !!

> sudo R CMD javareconf \
JAVA=${JAVA_HOME}/../bin/java \
JAVAC=${JAVA_HOME}/../bin/javac \
JAVAH=${JAVA_HOME}/../bin/javah \
JAR=${JAVA_HOME}/../bin/jar \
JAVA_LIBS="-L${JAVA_HOME}/lib/server -ljvm" \
JAVA_CPPFLAGS="-I${JAVA_HOME}/../include -I${JAVA_HOME}/../include/darwin"

Note! It looks like R CMD javareconf doesn’t update all the flags. Make sure that the file /Library/Frameworks/R.framework/Versions/3.4/Resources/etc/Makeconf contains the following entries:

JAVA_LIBS="-L${JAVA_HOME}/lib/server -ljvm" \
JAVA_CPPFLAGS="-I${JAVA_HOME}/../include -I${JAVA_HOME}/../include/darwin"

You should now have your Java and R linked together. It's time to add support for the clang that you installed a few steps above.

Let’s say your clang is in your home dir. Here: /Users/user_name/opt/clang+llvm-4.0.1-x86_64-apple-macosx10.9.0/

Make sure, to add following file (~/.R/Makevars – more info can be found here: R for Mac OS X FAQ and here: R Installation and Administration)


Also, make sure to modify file:


inside this file, make sure that the “LDFLAGS” line reads

LDFLAGS = -L/usr/local/lib -L/Users/user_name/opt/clang+llvm-4.0.1-x86_64-apple-macosx10.9.0/lib -lomp

You can confirm that it works by calling:

# you can confirm that by calling
> R CMD config --ldflags
-fopenmp -L/usr/local/lib 
-lomp -F/Library/Frameworks/R.framework/.. -framework R 
-lpcre -llzma -lbz2 -lz -licucore -lm -liconv

Now, it’s time to get rJava. Simply download it, and install it:

> curl "" -o rJava_0.9-8.tar.gz
> R CMD INSTALL rJava_0.9-8.tar.gz
* installing to library ‘/Library/Frameworks/R.framework/Versions/3.4/Resources/library’
* installing *source* package ‘rJava’ ...
*** installing help indices
** building package indices
** testing if installed package can be loaded
* DONE (rJava)

And you can now test it. Let's say you have (in your working directory) the following layout


and the file utils/ contains

package utils;

public class RUsingStringArray {
  public String [] createArray() {
    System.out.println("Creating empty array");
    return new String[0];
  }

  public int arrayLen(String [] arr) {
    return arr.length;
  }
}
You can compile it and run it in R

> javac utils/*.java
> export CLASSPATH=`pwd`
> export JAVA_HOME=$(/usr/libexec/java_home -v 1.8.0_144)/jre
> R
> library(rJava)
> .jinit()
> obj <- .jnew("utils.RUsingStringArray")
> s <- .jcall(obj, returnSig="[Ljava/lang/String;", method="createArray")
> .jcall(obj, returnSig="I", method = "arrayLen", s)
Class: class [Ljava.lang.String;
[1] 0

Retro Like (8bit) Terminal Game – playing for Fun and Profit :)

So, what do you think about playing a retro style, terminal based game where you can gain some real life experience as well? If you are interested, take a look below (click the image for the full experience – or click here for full screen size).

I find Cathode to be a super fancy terminal emulator with an element of surprise. Whenever you want to gain some attention during presentations, it comes in handy. There is nothing as good as the “Back to the Future” effect when you want to gain some attention from the audience. Especially when your terminal session itself is quite boring. On a regular basis, I definitely prefer iTerm2. But for the looks, Cathode plays its role.

As for the resources.



Type speed instructions for macOS: Building GNU Typist for Mac OS

Sounds: AtomSplitter

How to share enum between C++ and Java ;)

Note! This is not an actual suggestion for proceeding like this during development ;) It's one of those “fake it till you make it” solutions. So, please, don't try this at home! ;)

There was this question at Stack Overflow: how to share enums between C++ and Java. Well, there is one issue here. You can't rely on comments (to trick the compiler), as Java and C++ have the same comment syntax. So how do you pull off this kind of cross-language coding? Take a look here.

You can use a nasty hack with “disabling” public keyword in C++ using macro.

#include <stdio.h>

#define public
#include ""
#undef public

int main() {
  A val = a;
  if(val == a) {
    printf("OK\n");
  } else {
    printf("Not OK\n");
  }
  return 0;
}
Then, inside Java class you can define enum

enum A { a, b, c };  // extra values are illustrative; the trailing semicolon keeps the C++ side happy

And you can use it in Java later on. As you can see above, it is used in C++ as well.

public class B {
  public static void main(String [] arg) {
    A val = A.a;
    if(val == A.a) {
      System.out.println("OK");
    } else {
      System.out.println("Not OK");
    }
  }
}

Seems to work fine :)

> javac *.java
> java B
> g++ -o main ./
> ./main