This is a title, for a blog, this blog, Welcome

FLOSS, Programming, Server Administration, Libreoffice and Electronics!

If you want something said, ask a man.

If you want something done, ask a woman.

If you want something neither said nor done, ask a cat.

Craig Ferguson

How to save vim sessions

Introduction

I was doing some research on how to save sessions on my laptop.

I used to be able to just use the hibernation feature of my previous laptop to continue my tty and graphical sessions, but unfortunately it’s not very reliable.

In my experience, you really have to have specific GPUs, CPUs and such for it to be stable enough to be an option.

I looked into tmux, hoping it had some sort of session-save feature, but I just couldn’t find one.

So what else do we have?

A Solution

Then I found an article that explained how to save vim tab sessions.

It’s actually just a matter of adding script entries to your ~/.vimrc file.

The How

Do note that the code is not from me; I made some minor changes, but I took it from an article I read.

Here’s the link to the article : [Link missing in plain sight. The link was here, someone must have stolen it! Police!] (Yeah sorry, just can’t find it)

Here’s the code, which has to be appended to your ~/.vimrc :

" START Session saving for vim

" Creates a session
function! MakeSession()
  let b:sessiondir = $HOME . "/.vim_sessions" . getcwd()
  if (filewritable(b:sessiondir) != 2)
    exe 'silent !mkdir -p ' b:sessiondir
    redraw!
  endif
  let b:filename = b:sessiondir . '/session.vim'
  exe "mksession! " . b:filename
endfunction

" Updates a session, BUT ONLY IF IT ALREADY EXISTS
function! UpdateSession()
  if exists("g:sessionSaved")
      let b:sessiondir = $HOME . "/.vim_sessions" . getcwd()
      let b:sessionfile = b:sessiondir . "/session.vim"
      if (filereadable(b:sessionfile))
        let b:filename = b:sessiondir . '/session.vim'
        exe "mksession! " . b:filename
      endif
  endif
endfunction

" Loads a session if it exists
function! LoadSession()
  if argc() == 0
      let b:sessiondir = $HOME . "/.vim_sessions" . getcwd()
      let b:sessionfile = b:sessiondir . "/session.vim"
      if (filereadable(b:sessionfile))
        let g:sessionSaved=1
        exe 'source ' b:sessionfile
      else
        echo "No session loaded."
      endif
  endif
endfunction

au VimEnter * nested :call LoadSession()
au VimLeave * :call UpdateSession()
" Save a session by doing : \m
map <leader>m :call MakeSession()<CR>

" END Session saving for vim

Using The Vim Session script

At this point, you can save a session by doing : <leader> + m, where <leader> is usually the backslash key (\), so the default combination is \ followed by m.

This saves your position in the current vim tab, the list of open tabs and your position in all the other tabs.

You have to leave your session by doing :

:qa

From this point on, if you close tabs or change things, the state is saved automatically, so :qa is pretty much the only way to exit vim if you want to keep your current state.

When you want to recover your state, just change your directory to the one from which you opened your initial vim and just type this :

vim

This should reopen your previous state. Pretty neat!
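To make the whole workflow concrete, here is a rough sketch from the shell side (the project directory is purely illustrative) :

cd ~/projects/someProject    # the directory you normally start vim from
vim                          # work as usual, save the session with \m, leave with :qa

# later, to get the whole state back :
cd ~/projects/someProject
vim                          # the session for this directory is loaded automatically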

Conclusion

Try it! Enough said! :D


Autoconf projects compiled statically

Introduction

Have you ever copied an executable over to another system only to find out that it won’t work because some shared object is not of the exact version you have on your system?

Yes, dynamically linked programs are lean and usually compact, but they do have certain issues, especially when copying them over to another system. Even when updating, package managers sometimes make errors, or you may have manually compiled a project (to be on the bleeding edge) that no longer works because some dependency got updated.

This means that the project has to be recompiled and reinstalled.

Of course you could hack this by symlinking the shared object to the older version but your mileage will vary and there can be some sad results.

There is another way.

A Solution

Statically linking executables! Yes, the resulting executable is bigger (sometimes much bigger) than its dynamic counterpart, but that’s pretty much the only drawback of using this technique. It is also possible to offset this drawback by using a different C library like musl, which may well create a smaller executable than the dynamic counterpart. (Just make sure to strip the executable for the best size results.)

It’s so nice to be able to use executables anywhere without always having to provide any dependencies (shared objects).

A statically linked executable is simply an executable that contains all the libraries it depends on.

The How

It’s as simple as adding a single compilation flag to gcc or clang: just add the ‘-static’ flag and voilà!
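For a single source file, and assuming a trivial hello.c in the current directory, that looks like this (just a sketch) :

gcc -static -o hello hello.c
file hello    # should report something like "statically linked"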

For projects that use a Makefile or similar, we just have to set ‘-static’ in both CFLAGS and LDFLAGS and that works everywhere!

Not quite… we sure wish that were the case, but it isn’t.

One of the places where this doesn’t work is with projects that use Autoconf, and especially with libtool.

Autoconf with libtool

Do note that many projects support a configure flag to compile statically so it’s important to check that first, before using this technique.

Libtool is a script that wraps the linker/compiler and helps with compatibility. (Don’t throw rocks at me please, I’ve never used libtool with my own projects so my understanding of it is limited.)

Now the issue here is that libtool does not honor the ‘-static’ flag. It just drops that flag. It does have a (kind of hidden) way to reactivate that flag for the underlying gcc call. Here’s how :

CFLAGS="-static" LDFLAGS="-static" ./configure <your configure flags here>

and then when you use make :

CFLAGS="-all-static" LDFLAGS="-all-static" make <your make flags here>

This should ensure that the resulting executables are compiled statically.

You can’t use “-all-static” for the configuration phase as this will break the tests. It must be because gcc does not support “-all-static” :).

Conclusion

I just want to make it clear that I’m not a “statically link everything!” advocate.

Dynamically linked executables are the de facto method to use on a host system for various very good reasons.

One of them is actually a legal one: the GPLv2 (and later) does not allow its libraries to be statically linked into a non-GPL executable.

Do note that the LGPLv2 does allow its libraries to be statically linked, as far as I know. The LGPLv3 does not seem to allow it.

The goal of this article is strictly convenience when deploying containers/jails.

As a rule of thumb, it is considered bad practice to package statically linked executables for distribution to end users.

If you absolutely want to make sure that your package contains all or certain dependencies, just include the shared objects it depends on.

Statically linked executables are more of a black box than anything else, and it’s better not to run statically linked executables distributed by peers you don’t trust.


JailTools

Who needs chroots? Security? What is that?

I first became interested in chroots when a friend of mine, who was renting a server, found out that their PHP installation had been cracked into. They noticed their server was doing abnormal uploading and eventually found that a plethora of files were hidden somewhere.

Chroots, a -first- -second- -whatever- line of defense

Chroot, now that sounds like the sound you made in the Nintendo game Super Mario Bros. 2 when you picked a vegetable! But I digress.

Why chroots? What are they? Technically, they make it possible to change where the root directory points. If, say, you have an empty directory named buzzYa and you somehow managed to chroot into it, you would end up with nowhere to go. Is it that easy to set up? Absolutely not! Are chroots alone enough security against adversaries? Absolutely not! They help, but by themselves they are almost useless. I’ll get to that later.

Chroots are a pain to set up under GNU/Linux. At first, I started by manually making a few directories and copying the files I wanted into them, painstakingly trying to keep up with updates and having issues actually starting the thing. The thing is, you pretty much need to recreate a minimal GNU/Linux distribution inside your chroot: a minimal set of /etc/ files, a minimal set of devices in /dev/ and all the libraries necessary to run your program (granted it’s not statically linked). My point is that creating chroots manually is too error prone and frustrating to be attempted.
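To give an idea of what the manual process looks like, here is a rough sketch (the paths and the binary are purely illustrative, and the list of shared objects will differ on your system) :

# create the skeleton of the jail
mkdir -p /srv/jail/bin /srv/jail/lib /srv/jail/lib64 /srv/jail/etc /srv/jail/dev
# copy a binary into it
cp /bin/sh /srv/jail/bin/
# list the shared objects it needs; each one (plus the dynamic loader)
# has to be copied into the jail at the same relative path
ldd /bin/sh
# once everything is in place (as root) :
chroot /srv/jail /bin/sh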

That’s when jailTools was born. At first, with just shell scripts, I made it create the basic filesystem directories and copy over some shared objects and binaries. Then, I created a script to copy libraries and binaries into what I call jails (taking the term from FreeBSD), making sure to also copy along the shared objects they depend on from the base system.

At that point, I was copying over the binaries and libraries from the base system. This actually created jails of around 50MB, maybe even 100MB; bash by itself requires quite a few dependencies. You also need special shared objects like libnss, which are almost always loaded at runtime rather than being a direct dependency. There are also directories that are necessary, like /usr/share/locale, /usr/lib/locale and /usr/lib/gconv, and, if you want SSL/TLS support, /etc/ssl and /usr/share/ca-certificates.

The code base

Just check it out and take it for a spin : jailTools repository

It’s a work in progress, there are a lot of features that are being planned.

In Retrospect

This article is not at all what I wanted it to be. I wanted it to be a complete introduction to my project jailTools, but I ended up having various unfinished parts everywhere, totally unable to gather the courage to fully finish it.

So I decided to finally just trim the unfinished part and post it as is.

I’ll post more design and implementation details in future posts.

Just bear with me :)


A Dashing entry!

No! Not that shell!!!

My experience with dash has always been one of hatred. I didn’t quite feel the emotion myself, but I always felt that dash hated me and my scripts. Any script I would throw at it, it would chew back with tons of errors and, I’d swear, maybe even arrogance. Being in such a situation with it, I was totally baffled that people could actually use that… devil shell!

In my scripts, to avoid the pain of its draconian touch, I would usually do this :

case "$(readlink -f  /proc/$$/exe)"; in
    *dash)
        echo "We don't support dash"
        exit 1
    ;;
    
    *)
        sh="$(readlink -f /proc/$$/exe)"
    ;;
esac

I would then have the assurance that my scripts would not go through this (insert curse here) shell!

One day

as I was working on jailTools, FrozenFox (a fellow friend on freenode’s IRC channel #xroutine) pointed out that I should try to stick as close as possible to POSIX compatible shells so as to support as many shells as possible. To my bafflement, FrozenFox mentioned the shell `dash’ as being one of the most POSIX conforming shells! So I took a (very) deep breath and decided to finally give that shell a chance. What seemed like mountains before were actually just technicalities. It turns out I wasn’t that far off from the POSIX style after all.

Some of The Changes I had to do

It turns out that the major “issue” with dash is the fact that it does not support bash’s “function” way of creating functions.

Instead of doing :

function foo() { 
..
}

We should do :

foo() {
..
}

With this change, the bulk of my scripts were actually working correctly! I could finally bury the hatchet and support dash. Sure, it doesn’t support bashisms like substring expansion, but that is easily fixed with sed.

Where we do this in bash :

a="thisVeryLongVar"; echo "${a:0:4}${a:8}"

Giving the result :

 "thisLongVar"

We would need to create our own function using sed :

# substring offset <optional length> string
# cuts a string at the starting offset and the optional length.
substring() {
    local init=$1; shift
    local toFetch
    if [ "$2" != "" ]; then toFetch="\(.\{$1\}\).*"; shift; else toFetch="\(.*\)"; fi
    echo "$1" | sed -e "s/^.\{$init\}$toFetch$/\1/"
}

a="thisVeryLongVar"; echo "$(substring 0 4 $a)$(substring 8 $a)"

And it would give the same result :

"thisLongVar"

A bit more code to do the same thing, but it should be portable to pretty much all the other shells out there.

The next issue was with the command `read’. Under dash, read is not as featureful as in bash/zsh; under those, we could do :

read -d "" myList << EOF
entry one
entry two
entry three
EOF

In dash, read does not feature ‘-d’, but surprisingly we can do this instead :

myList=$(cat << EOF
entry one
entry two
entry three
EOF
)

and it amounts to exactly the same thing.

dash also doesn’t have the environment variable UID :

uid=$UID

so instead we use ‘id’ :

uid=$(id -u)

The outcome

So the mountain I thought was the Everest is now just a petty mound. A few changes and we can use this very fast shell. Sure, there’s nothing that will make someone actually use it as their command line shell, as it doesn’t support command history and many other features we take for granted in other shells. Let’s keep things to their strengths, shall we? Dash is meant for running scripts and it does that well.

I have now converted jailTools to fully support dash thanks to these changes, and they lived happily ever after, with the task of converting all of their scripts to support dash in the future. ^_^


Free SSL/TLS certificates with Let’s encrypt!

It is now possible to have a fully secure internet, totally free of charge!

(I mean this in the grand scale of things, not just downsizing the word “internet” to mean “http”, eheh)

(it does sound preacherlike, but I am very excited by this :D)

This text describes one of the many ways to implement secure HTTP for lighttpd and STARTTLS for postfix and courier-imap.

Some background information on secure HTTP

Domain certificates used to cost at least $100 per year (probably more!) to get SSL/TLS keys for a single domain (https://)! Now, with Let’s Encrypt, this is totally free! And the certificates are fully legitimate. This is not just some self-signed certificate that only certain browsers support. This is fully compatible with a lot (I wanted to write all, eheh, but I’m staying prudent) of the most important pillars of the current internet.

There are absolutely no reasons not to do this. It might even become mandatory for websites to offer secure HTTP soon; there are talks that every website will need to support encrypted HTTP on the internet. While this will probably never happen, the fact that it can be done for free will certainly be a major stepping stone for this important (read: very!) vision.

Here are some reasons why implementing secure HTTP is important :

  • To seriously limit Man in the middle attacks (eavesdropping).
  • To have much more certainty that the website we want to use is indeed the one we think it is.
  • Encryption is great :D.

Things you need to follow this guide

  • a server you own/rent
  • at least one domain you own
  • The GNU/Linux operating system
  • more than just basic system usage knowledge (this is intended for system administrators after all)

on your system:

  • use lighttpd as your web server (although we give pointers for the others)
  • git
  • bash or zsh, curl, sed, etc.

optional parts :

  • postfix
  • courier-imap

Let’s Encrypt

Let’s Encrypt provides certificate signing using the ACME protocol. (Nothing has to be done on their website; there is no need to create an account or give out sensitive private information.)

There are quite a few different clients that were made to interface with Let’s Encrypt’s ACME server instance. This link contains quite a few different options. I chose dehydrated, mainly because it is implemented fully as a shell script. It can be run whenever we wish (for instance from a cron script) and is fairly easy to use.

Dehydrated, an ACME client

To get dehydrated, we’ll simply clone its git repository from GitHub.
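Something along these lines should do (the exact GitHub path may differ, so double check the project’s page for the current location) :

git clone https://github.com/dehydrated-io/dehydrated.git
cd dehydrated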

There are only two configuration files we have to edit to get rolling. (You can also set up a shell script called ‘hook.sh’ if you need to do more fine-grained steps in your certificate signing process.)

The first is the file “domains.txt” : each line should start with a domain name, followed by the full hostnames of all the subdomains for that domain, separated by spaces. Each line covers a single domain and its subdomains.

example content of “domains.txt” :

foobar.org www.foobar.org test.foobar.org
kernel.org www.kernel.org git.kernel.org bugs.kernel.org
example.org www.example.org mail.example.org

The second is the file “config” :

we want to start with the example from “docs/example/config”

add this :

CA="https://acme-staging.api.letsencrypt.org/directory"

just after :

#CA="https://acme-v01.api.letsencrypt.org/directory"

This will make us request the staging (test) server rather than the production one.

(very important! It is very easy to be temporarily banned and it takes quite some time to be allowed afterwards)

Later on, when everything works correctly with our staging dummy certificates, we can comment out the staging line and uncomment the production one. (We’ll get to that later.)
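In other words, once everything checks out, that part of “config” should end up looking like this :

CA="https://acme-v01.api.letsencrypt.org/directory"
#CA="https://acme-staging.api.letsencrypt.org/directory"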

Ownership verification

To be awarded a signed certificate, ACME needs to make sure that we are indeed the owner of the domain name (and of the website). With this particular client, ACME ownership verification can be done using two methods: a website (HTTP) method and a DNS method. We will only go over the website method in this guide.

(Note that the website method is technically not a perfect way to ensure that a server is really controlled by the person owning the domain name, simply because someone controlling a web server could use it to obtain a certificate for a domain name they do not own, granted the domain name points to that server’s IP. However, it is fully accepted by the ACME protocol and it is the method we will show in this guide.)

First, uncomment this line in the “config” file :

#WELLKNOWN="/var/www/dehydrated"

You can change it; just make sure that lighttpd has read access to it and that the user with which you will run the dehydrated script has read and write access to that directory. This is where the challenge data will be posted. The ACME server will then need to have HTTP access to files in this location (the files will have some kind of hashed name made of random characters).

In lighttpd (or any web server for that matter), we will need to make the path “/.well-known/acme-challenge/” point to “/var/www/dehydrated” and make its content readable by any external source.

To do that reliably on all domain names, we will make use of lighttpd’s “mod_alias” plugin, so you need to make sure it is activated in your “server.modules”. Here’s what you need to add to your lighttpd configuration file :

# ACME challenge for HTTPS
alias.url += (
    "/.well-known/acme-challenge/" => "/var/www/dehydrated/"
)

With this, for whatever domain is accessed, the URL “/.well-known/acme-challenge” will point to “/var/www/dehydrated”. So, for example, “http://foobar.org/.well-known/acme-challenge/” must serve the content of “/var/www/dehydrated” (I don’t mean that it should actually list the content, just that if an external source knows what file to look for, it should be able to download it from there).
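A quick way to check that the alias works is to drop a test file in there and fetch it from the outside (foobar.org stands in for one of your own domains, and the file name is arbitrary) :

echo "hello" > /var/www/dehydrated/test.txt
curl http://foobar.org/.well-known/acme-challenge/test.txt    # should print "hello"
rm /var/www/dehydrated/test.txt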

Test the whole ordeal with the staging Let’s encrypt

Now that we have this set up, it’s time to test! Double check, in the Dehydrated section above, that you effectively have your CA set to the staging URL.

Now (rubs hands), just run this :

./dehydrated -c

It will verbosely show the process of getting the certificates signed. It will also tell you whether you did a correct job setting up your “WELLKNOWN” directory.


To Do list in this entry

  • domains (domains you own obviously) (done)
  • staging (very important! It is very easy to be temporarily banned and it takes quite some time to be allowed afterwards) (done)
  • ACME website verification (also can be done with dns) (done)
  • “Wellknown” verification method (done)
  • run dehydrated to get mock certificates from the staging Let’s Encrypt; once all the domains get verified and get certificates, we can run the real deal.
  • At this point, we are given quite a few file types. What we want are : the private key, the host certificate and the chain certificate, and we mainly need all of these in “pem” format. (The fullchain is simply the chain and host certificates copied together, one after the other; see the sketch below.)
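For illustration, assuming the host certificate and the chain ended up in files named cert.pem and chain.pem (dehydrated normally produces a fullchain.pem for you already), the concatenation is simply :

cat cert.pem chain.pem > fullchain.pem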

lighttpd, postfix and courier-imap

Here are some rough notes for the remaining services :

  • A great tool to help figure out what to put in your HTTP server configuration : https://mozilla.github.io/server-side-tls/ssl-config-generator/
  • Disable SSLv3, and why : http://disablessl3.com/
  • Test your setup with SSL Labs : https://www.ssllabs.com/ssltest/
  • Forward secrecy ciphers/key exchange : what it means
  • Test POP3 STARTTLS : openssl s_client -starttls pop3 -connect URL:110
  • Test SMTP STARTTLS : openssl s_client -starttls smtp -connect URL:25
  • Test HTTPS : openssl s_client -connect URL:443
  • List all compatible ciphers with nmap : nmap --script ssl-enum-ciphers -p PORT DOMAIN
  • When the service runs on a non-official port : nmap -sV --script ssl-enum-ciphers -p PORT DOMAIN
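As a rough sketch of the lighttpd side (this assumes an older lighttpd 1.4.x, where ssl.pemfile expects the private key and host certificate concatenated into a single file and ssl.ca-file points to the chain; the paths are illustrative, so adapt them to where you store your certificates) :

$SERVER["socket"] == ":443" {
    ssl.engine  = "enable"
    ssl.pemfile = "/etc/lighttpd/certs/example.org/privkey-and-cert.pem"
    ssl.ca-file = "/etc/lighttpd/certs/example.org/chain.pem"
}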


LibreOffice’s Basic - Objects position and size

How do we change the size and position of an object?

Normally we do that with the GUI, but here’s how to do this programmatically using Basic.

standard component properties are in :

thisComponent.DrawPage.Forms.getByName(<form name>).getByName(<component name>)

Theoretically, its position and size should be there too, right? Unfortunately, no. The positioning and size properties are elsewhere: they are in thisComponent.DrawPage.getByIndex(). The index is not the same as in the normal XForm component though, so you can’t just fetch the index there and look that index up in DrawPage.

Certain components (and yes, it’s really only certain components) contain a property called “Control” that links back to the form component element above, so you can actually get the name of the component. The problem is that not all elements in the DrawPage indexes actually contain the “Control” property. We have to filter the elements by their ShapeType. Say we loop over all elements looking for lblTitle, with ‘i’ as our looping variable. We would first check something like this :

if obj.getByIndex(i).getShapeType() = "com.sun.star.drawing.ControlShape"

If that is true, then we are guaranteed that the “Control” property is available. We then do something like this :

if obj.getByIndex(i).Control.Name = "lblTitle" then ...
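Here is a minimal sketch of that loop in Basic, assuming the form control we are after is named lblTitle (the names lblTitle and shapeObj are just illustrative) :

Dim oDrawPage As Object
Dim shapeObj As Object
Dim i As Integer

oDrawPage = thisComponent.DrawPage
For i = 0 To oDrawPage.Count - 1
    ' only ControlShape elements are guaranteed to have the "Control" property
    If oDrawPage.getByIndex(i).getShapeType() = "com.sun.star.drawing.ControlShape" Then
        If oDrawPage.getByIndex(i).Control.Name = "lblTitle" Then
            shapeObj = oDrawPage.getByIndex(i)
            Exit For
        End If
    End If
Next i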

At that point, we can look up exactly the element for which we want to either get or set the position or size. The two properties we are interested in are : Position and Size. Position contains X and Y, and Size contains Width and Height. You just need to assign Position or Size to an Object variable, then you can change its value and push it back into the component. Like so :

(say the component object we got from earlier was put into the variable shapeObj)

Dim oPos As Object
Dim oSize As Object

' Position and Size are copies; modify them, then assign them back
' (the values are expressed in 1/100 mm)
oPos = shapeObj.Position
oSize = shapeObj.Size

oPos.X = oPos.X + 5
oSize.Width = oSize.Width + 5

shapeObj.Position = oPos
shapeObj.Size = oSize

And that’s it! You can move and resize any of your components from your scripts!

Happy Hacking!