i3 shortcuts cheatsheet

i3 is a super cool and fully customizable tiling window manager, but it can be a bit hard to start with, so I prepared a visual cheat sheet with the default shortcuts.

i3 shortcuts cheatsheet layout

You can download the xcf file if you want to edit and improve it with GIMP.

Press the “prefix” key (Win or Alt, depending on your config) and then the key for the action you want to activate.

The main problem for me is that I am too used to Vim’s hjkl navigation, while i3 uses H for “horizontal split”, so I had to remix the layout a little bit.

Adding some other tweaks, here is my layout:

You can read the exact config file in my dotfiles repo.
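A sketch of what such a remap could look like in the i3 config (these exact bindings are my illustration, not necessarily what is in the repo):

```
# vim-style focus movement on hjkl
bindsym $mod+h focus left
bindsym $mod+j focus down
bindsym $mod+k focus up
bindsym $mod+l focus right

# "horizontal split" moved from h to g
bindsym $mod+g split h
bindsym $mod+v split v
```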

Controlling Snapmaker 3D printer with Luban over web interface

I am using a Snapmaker A250, which is a pretty nice machine – a 3D printer, CNC and laser cutter in one.
By default it comes with desktop software for Windows, Mac and Linux, but I needed something more.

The problem

Sometimes 3D prints just fail for various reasons, and if you are not around or not paying attention, the machine can be damaged.
I learned this the hard way (nothing fatal, just a lot of cleaning, replacing the nozzle, etc).

It is generally recommended to take a look at the print every 15 or so minutes, but sometimes I print stuff that takes many hours and I cannot be tied to home all day.
What I needed was a way to remotely monitor the machine and stop it in case things are going south.

Here is the plan:

  1. Get an old PC, install Linux and Snapmaker Luban (did I mention that everything we’ll talk about is open source?)
  2. Configure Snapmaker Luban to run as a systemd service inside a fake X server
  3. Set up the USB driver and connect the PC to the printer
  4. Set up the webcam streaming software
  5. Configure network tunneling

Install Linux

I installed ArchLinux because that’s why!
Seriously, I like the distro and it already has Luban in the AUR – give it a try.
You can install it on any other supported OS or distribution by downloading the software from luban.xyz, but…

on Arch with yay all you need to do is:

yay -S snapmaker-luban

and you are done.

Snapmaker Luban as a web service

Snapmaker Luban is an Electron app, which means that it requires a graphical interface to start, but we are going to trick it (just a little bit).

Install xvfb (X virtual framebuffer):

sudo pacman -S xorg-server-xvfb

By default on Arch, Luban is installed under /opt/Snapmaker Luban.

Let’s create a file called /opt/snapmaker-luban-headless.sh that looks like this:

#!/usr/bin/env bash
export DISPLAY=:1
# start a virtual framebuffer, then run Luban inside it
Xvfb $DISPLAY -screen 0 1024x768x16 &
/opt/Snapmaker\ Luban/snapmaker-luban \
    --port 8082 \
    --host 0.0.0.0   # 0.0.0.0 is my assumption here: listen on all interfaces

and make it executable:

sudo chmod +x /opt/snapmaker-luban-headless.sh

Create the service file /etc/systemd/system/luban.service that looks like this:

[Unit]
Description=Luban 3d
After=network.target

[Service]
ExecStart=/opt/snapmaker-luban-headless.sh
Restart=always

[Install]
WantedBy=multi-user.target
and enable the service:

sudo systemctl enable luban

When you restart the computer, Luban will start and the web interface will be accessible on port 8082.

Setup USB driver and connect PC to the printer

In my case I had to install an additional driver; after several failed attempts I found that the patched version from github.com/juliagoda/CH341SER works for me.

If you don’t have the Linux kernel headers installed, you’ll have to install them too:

sudo pacman -S linux-headers

then clone the repo

git clone https://github.com/juliagoda/CH341SER.git

and follow the instructions.
In the most straightforward case it should look like this:

sudo pacman -S arduino arduino-docs avr-binutils avr-gcc avr-libc avrdude
cd CH341SER
sudo make load
find . -name '*.ko' | xargs gzip
sudo cp ch34x.ko.gz /usr/lib/modules/$(uname -r)/kernel/drivers/usb/serial

I would suggest reading the whole README file anyway.

Restart the computer, open a browser at http://address:8082, go to 3D printing -> Workspace, select /dev/ttyUSB0 and click Connect.
If everything goes according to plan, you will get a successful connection, which means you can start controlling your printer remotely.

Setup webcam streaming software

The simplest way to get this going is with motion.

On Arch you can install it via:

sudo pacman -S motion

and enable the service:

sudo systemctl enable motion

the config file is at /etc/motion/motion.conf.

Most webcams will run out of the box.

You don’t have to, but if you need to, you can play a little bit with the configuration.

I personally increased the stream refresh rate by adding this to the file:

stream_maxrate 90

so the picture is not that choppy. This of course increases the generated network traffic.
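For reference, a minimal sketch of the other options I would look at in /etc/motion/motion.conf (the values here are illustrative, not my exact config):

```
# port of the MJPEG stream
stream_port 8081
# allow access from other machines, not just localhost
stream_localhost off
# capture resolution and rate
width 1280
height 720
framerate 30
```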

Configure tunneling

This is optional – if you are only going to use the web interface on your WiFi, you don’t need it.

To make things accessible from anywhere, I configured the machine to SSH tunnel the ports to a proper server that I have in a data center.

It is quite easy with autossh.

Create a file /etc/systemd/system/autossh-motion.service containing:

[Unit]
Description=Tunnel webcam port to X
After=network.target

[Service]
ExecStart=/usr/bin/autossh -M 0 -NR 8081:localhost:8081 -o TCPKeepAlive=yes example.com
Restart=always

[Install]
WantedBy=multi-user.target

and /etc/systemd/system/autossh-luban.service with:

[Unit]
Description=Tunnel Snapmaker Luban port to X
After=network.target luban.service

[Service]
ExecStart=/usr/bin/autossh -M 0 -NR 8082:localhost:8082 -o TCPKeepAlive=yes example.com
Restart=always

[Install]
WantedBy=multi-user.target

Notice that the second tunnel will wait until the Luban service starts.

then just enable the services:

sudo systemctl enable autossh-motion
sudo systemctl enable autossh-luban


I am quite happy with the result: even if the power goes out, the machine will boot, connect to the remote server and allow me to monitor and control the printer in real time.

How Docker Swarm made me realize that I am doing secrets wrong

The first thing most people do in a new project is to create a config file and then immediately put it in .gitignore.
Of course, you do not want to have potentially sensitive data under revision control.
Another place where we do not want secrets is in our Docker images – don’t forget to add the file to .dockerignore too; I’ve made that mistake.

So there is a conflict: we instinctively want to read from files (because it is easy), but we do not want that file to end up in the wrong place.

“Well, the solution is simple, just use environment variables” – you’ll probably say, and you’ll be right. After all, that’s what the 12-factor app told us to do.
As a compromise we usually set up our apps to load configuration and secrets from env variables, and we load the env variables from a .env file – a good compromise.

Even in a k8s environment we are used to “mapping” Kubernetes secrets to environment variables of the pod.
Docker Compose also embraces this approach to the point that if you want you can directly reference values from .env in docker-compose.yaml.

So imagine my surprise when I realized that Docker Swarm does not support this feature. Even more, the docs say that they intentionally chose not to do it.
At first I was trying to fight this decision, I went through denial, anger, bargaining, etc. but then it hit me, it is actually a brilliantly simple solution for a hard problem.

What Docker Swarm does is store your secret encrypted; when the container starts, Swarm mounts it as a file under /run/secrets and then you can read it.
First of all, it is super simple, and more importantly, if you change the database password you don’t have to go and update the env vars on each container – you just change the secret and you are set.

Of course you can use tools like Hashicorp Vault or AWS SecretsManager to read secrets when your app starts, but this is significantly more complicated than reading a file.

Some problems coming from storing secrets in env variables:

  • they are literally variables, they are mutable, anybody can change the value
  • sometimes when setting them some data gets mangled, so people tend to base64 everything to make sure it is all safe
  • it is tricky to deal with formatting, new lines, etc. (if you want to have YAML or JSON for some reason)
  • env variables “leak” to child processes
  • env variables have the tendency to show up in logs

The main problem with reading from a file:
As with a lot of Docker-related things, this one is also half-baked. Reading from a file while developing locally is easy. The same code can be used in production in Swarm mode, but what if you decide to test your code locally with docker-compose? Well, you are f-ed! Even though the docs state otherwise, secrets do not work with docker-compose and you are stuck with env variables.
You have already added the secrets file in .dockerignore so there is no way to add it in the container image.
So … either “temporarily” remove it from .dockerignore and pray that you won’t forget to add it again (and that you won’t push the image to a docker registry), or … make your app load secrets from both a file and env variables, at which point I hope you are like “wtf dude!”

Docker insists on their solution and does not plan on ever supporting what people really want.
You can “hack” the system by keeping the reads from the environment and setting your Dockerfile’s entry point to a script that reads the files and sets environment variables, but it is a bit ugly, and if you want to switch to another orchestrator you have to modify your Dockerfiles and rebuild the images.
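A minimal sketch of that entry-point hack (load_secrets and the directory argument are my names, not an established convention): every file under the secrets directory becomes an environment variable named after the file, and then the real command would be exec’d.

```shell
#!/usr/bin/env bash
# Export every file in a directory as an environment variable
# named after the file (defaults to Swarm's /run/secrets).
load_secrets() {
    local dir="${1:-/run/secrets}" f
    [ -d "$dir" ] || return 0
    for f in "$dir"/*; do
        [ -f "$f" ] || continue
        export "$(basename "$f")=$(cat "$f")"
    done
}

# a real entry point would then do: load_secrets && exec "$@"
```

Used as the ENTRYPOINT, the application keeps reading env variables and never learns where they came from.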

Thank you, Docker. I learned a lot and I really believe your solution is superior to what we do right now, but … sadly, I’ll keep using env variables for now.

A comment on the github issue sums this up pretty well:

I've also hit this issue with docker 20.03 using docker service update --secret-add.
There is no /run/secrets directory in the container.
Did not try the tmpfs workaround.
Seeing this ticket closed without a fix or a solution makes me lose more confidence in docker.

Why I built desktop PC in 2021

The switch happened overnight for me: at my first job I was working on a desktop PC, but since then every single workplace I have been in ultimately used laptops.

Somehow the big ugly boxes went out of favor with the rise of tablets, eeePCs (do you remember those?) and MacBook Airs coming out of mail envelopes.

Maybe if you are working in a big company and have to go to a lot of meetings you really need a portable device, but to be honest, back in 2013 I was attending stand-ups and such taking notes on my iPad. If you are appalled by the idea of typing by tapping on a screen – I know it is less than ideal, but come on, we grew up typing SMSes on 9-key keyboards and we were pretty good at it.


I was working on a 2017 MacBook Pro with a Core i5 and 16GB of RAM. I really do not recommend buying such a machine. MacBooks were never really that heat efficient, but this one is just another beast. The moment you join a Hangouts meeting and share your screen, you can barely use or even touch the computer.

At some point I realized that the Core i5 processor in my Dell PC from 2012 delivers the same performance as the one in my laptop (at the price of power consumption, but nobody thinks about that in desktops). The only problem was that the old desktop had only 8GB of RAM, but considering that under Linux you do not have to run Docker in a VM (constantly reserving 4GB of RAM for itself), it turns out to be a quite viable alternative.

So I gave it a try. ArchLinux is better than ever. Even with less RAM I was managing to do my daily job. I started using the MacBook as an AV machine – just for attending online meetings – and did the coding on the PC (I know, such a waste). I never really had a decent PC, so I decided that I finally deserve something more powerful.

Finally, let’s not forget what year it is. Because of the pandemic I haven’t attended a meeting in person for almost a year. The reasons to keep a mobile machine as your primary working device kept disappearing.

Why not 

OK, let’s discuss why you would not want a PC.

Linux is an awesome OS, but let’s be honest – sometimes the big corporations overlook software support for it. For example, my VPN client was terrible under Linux and it was constantly disconnecting. Of course, if it were open source, someone would have fixed the problem a long time ago, but as Cait used to say – we live in an imperfect world.

Windows? No. Sorry. Maybe I’ll install Windows some day if I want to play some games, but this is not a priority for me right now. The last time I tried working under Windows, the only thing that made this OS bearable for me was WSL, but if that’s the case, why not just use Linux instead?

What about buying an iMac or a Mac Pro? Both are very expensive machines. The Mac Pro was completely out of the question for me because I could not justify the price.

The iMac is an incredibly beautiful machine, but I am not sure how future proof it is, considering the fact that a lot of people expect drastic changes in that line of products. Also, I was wondering how good the cooling would be and I did not want to risk it. In my head I was picturing horrific scenarios where I have shoveled a huge amount of money just to discover that the CPU is throttled most of the time to prevent overheating.

Unless you go for the iMac option you will inevitably end up with … let’s call it a less-than-elegant box next to your desk, so don’t underestimate this. Maybe you insist on beautiful design, maybe you don’t have the space. I personally think I can live with it. Nowadays PC cases are not what they used to be, but let’s face it – it is still a box.

Let’s build

I was not very picky about the parts, but I wanted to be able to achieve something specific (and maybe not entirely legal).
What I ended up with is something that is powerful enough without spending a fortune on the build:

  • AMD Radeon RX 580
  • Gigabyte Z490 Vision G (which is the cheaper version of Vision D, but I think I am fine missing on the extra features)
  • Intel Core i7-10700K (I am not sure if I would ever overclock it, but it’s good to know that this feature is present)
  • Noctua NH-D15 cooler (that thing is HUGE, but amazingly quiet)
  • 500GB Samsung 860 EVO SSD
  • 2x32GB DDR4 Patriot Viper Steel RAM (maybe a bit of a stretch)
  • Cooler Master MWE 750 power supply
  • NZXT H710 Matte Black case 

A lot of things have changed since the last time I did this. I don’t remember PC cases costing this much. I also don’t remember cable management being this good. I was expecting to end up with an ugly box filled with messy cables. I couldn’t have been more wrong. The case is clean and nice to look at, and the cables are hidden in a way that makes it look like a carefully designed machine, not something assembled in someone’s garage. The design is simple and clean, and there are no RGB lights – I repeat, no RGB lights. Why do most PC case manufacturers assume we are all a bunch of 14-year-olds?

The price is half of what I would have paid for an iMac with similar specs.
Of course the iMac comes with a magnificent display, so if you factor in the price of a nice 4K display – it is not that much cheaper.
Another big advantage of the iMac is that it can run OS X, but guess what – this desktop build is very much hackintoshable.

The state of Hackintosh

Putting aside the legality and morality of running OS X on non-Apple hardware, it is quite an interesting topic from a technical point of view.

The last time I tried something like this was in 2013, and I barely succeeded in turning my Acer laptop into a fully working Hackintosh after a week – it was absolute mayhem. Hackintoshing a computer back then was black magic. Not just for me, but for most people. Most of us (who are not kext developers and do not have intimate knowledge of how OSX works) just tried random stuff until it worked.
There were some sparse online guides, but at the end of the day you just poked in the dark until you figured out the exact combination of hacked kernel extensions that worked in your specific case.

Everything is amazingly different now. OpenCore is a bootloader written by people who really know what they are doing, and nowadays you can follow a generic, structured guide and take a much more scientific approach to building a Hackintosh.

This hardware was specifically chosen to be close to what Apple sells, so it is relatively easy to install OSX on it.
Actually, I found a guy who had done this for a similar configuration, and just by slightly adapting his configuration I managed to do my own quite fast.
Surprisingly, it was harder to install all the right drivers for Windows (an operating system that is supposed to run on this hardware) than for OSX (an OS that was never meant to run here).

What still does not work is Wi-Fi and Bluetooth, but this was expected – it has almost never been possible, and I was prepared to buy a separate supported Wi-Fi chip. It is a desktop after all; mine is 15 cm away from the router and I prefer a cable connection, but I really want to be able to use my Bluetooth headphones. This is something I haven’t fixed yet, but it is definitely doable, so it is just a matter of time.


I think I have built a pretty decent machine. Most importantly, it is capable of running OSX (I am currently on Big Sur), Linux and Windows, so if at some point in the future I decide that I want to become a professional StarCraft player, I can dual boot.

Actually I played StarCraft II and Civilization 6 (under OSX) and they work perfectly. 

Some people say that this is a result of the virus – you are probably not going on vacation this year, so why not spend some money on toys.
I am not sure if that is the reason, but nevertheless I think a desktop PC in 2021 is a good idea (even if you plan to run Windows and play games on it).

JQ 101

JQ is an amazing tool that allows you to manipulate JSON via the command line / bash.

To be frank, I was a bit weirded out by people praising something so ‘trivial’ back in the day, but then I realized how powerful this thing actually is. Coming from a web background, manipulating JSON may seem like a really simple task, but when you get to bash, it is … a nightmare.

JQ makes things doable in a surprisingly simple way.

The thing is that it can be a bit hard to start with. Some people have asked me to provide several examples they can build on top of when developing their own stuff.

These are the basics; once you have mastered everything listed here, your logical next step is to read the official docs like a real dev.

So, this is my cheat sheet:

How to use


You can cat a file and pipe it to JQ like:

cat file.json | jq 'something'

but it is generally preferred to do it like this:

jq 'something' < file.json

I personally prefer to add the -r flag so JQ outputs raw strings (without the JSON quotes):

jq -r 'something' < file.json

Sample data

We are going to be working with this JSON for most of the examples

    {
        "data": [
            {
                "email": "user1@example.com",
                "username": "user1"
            },
            {
                "email": "user2@example.com",
                "username": "user2"
            }
        ],
        "users": [
            {
                "email": "user1@example.com",
                "user.name": "user1"
            },
            {
                "email": "user2@example.com",
                "user.name": "user2"
            }
        ]
    }



Simple select everything

jq -r '.' < file.json

will give you the root of the document – it doesn’t matter if it is an object or an array.

Select property of an object

jq -r '.data' < file.json

returns an array containing all users in ‘.data’

Select first user in data

jq -r '.data[0]' < file.json

returns the first object in the ‘data’ array

Select all usernames in ‘.data’

jq -r '.data[].username' < file.json



Select all usernames of ‘users’ array when keys contain dots

jq -r '.users[]["user.name"]' < file.json




Getting keys of an object

jq -r '.|keys' < file.json

gives you an array containing all keys ["data", "users"]

Getting keys of object on new lines

jq -r '.|keys[]' < file.json



Select only users called ‘user1’ from ‘data’

jq -r '.data[] | select(.username=="user1")' < file.json

gives you {"email": "user1@example.com", "username": "user1"} (note the double == – a single = is an assignment in jq and would select everything)

Select the emails of all users called ‘user1’

jq -r '.data[] | select(.username=="user1") | .email' < file.json
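A hedged sketch of how I would feed that selection into a shell loop (the sample document is inlined here so the snippet is self-contained, and the mail-sending line is just an illustrative echo):

```shell
# inline the sample document so the example runs without file.json
json='{"data":[{"email":"user1@example.com","username":"user1"},
               {"email":"user2@example.com","username":"user2"}]}'

# iterate over the e-mails of all users called 'user1'
for email in $(echo "$json" | jq -r '.data[] | select(.username=="user1") | .email'); do
    echo "sending mail to $email"
done
```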

Create JSON

Create object with key username and value ‘user1’

jq -n --arg mykey username --arg myval user1 '{($mykey):$myval}'

Append property to object

jq -n --arg mykey username --arg myval user1 '{($mykey):$myval}' | jq -r '. |= . + {"email": "user1@example.com"}'

or simpler:

echo '{"username": "user1"}' | jq -r '. |= . + {"email": "user1@example.com"}'

both result in:

{
  "username": "user1",
  "email": "user1@example.com"
}

Change value of object property

jq -n '{ "username": "user1"}' | jq -r '.username="user11"'

Create array

jq -n '[]'

same as

echo '[]' | jq -r '.'

Set value of array element

jq -n '[1,2,3]' | jq -r '.[2] |=  5'

outputs [1,2,5]

Add element to array (of unknown length)

jq -n '[1,2,3]' | jq -r '.[.|length] |=  4'

result [1,2,3,4]
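A small addition of mine: the same append can also be written with jq’s ‘+=’ update operator, which I find easier to read:

```shell
# append 4 to the array; -c prints the result on one line
jq -n '[1,2,3]' | jq -rc '. += [4]'
```

result [1,2,3,4]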

PGP for the masses


Using PGP is very simple: you need to generate a key (it will actually be two keys – a public and a private one). The public key is what you give to other people so they can encrypt files for you. The private key is something you do not want to share.

This guide/cheatsheet assumes that you are using OS X, but aside from the GUI tools, the rest is the same for all operating systems.

There are 4 things that you need to know:

  • how to generate your key
  • how to find other people’s keys
  • how to encrypt
  • how to decrypt


Some clarifications:

OpenPGP is a standard for encryption

PGP stands for Pretty Good Privacy – this is software that implements the OpenPGP standard

GPG stands for GNU Privacy Guard – open source implementation of OpenPGP (this is what we are going to use)

PHP is something completely different


Download the tools from https://gpgtools.org/. If you are not planning to use them with the Mail App, click Customise during installation and deselect GPG Mail.

Create new key

You can just type
gpg --full-generate-key

and answer the questions, or via the GUI:
Click New, enter your e-mail address and password and wait for the key generation to finish. At the end, click to upload the key to the key server.

If you have already generated a key, you can upload it by clicking Key -> Send Public Key to Key Server.

Note: Your key will be uploaded to one of the SKS servers used by OpenPGP, and it might take some time until it is visible on all servers.

Retrieving other’s public keys

If you have a tin foil hat, you would prefer to ask people to send you their PGP keys directly. Never trust keys that you have not asked for or that you are not sure were sent by the person you think is behind them – somebody may be pretending to be a friend.

If you think you know what you are doing, however, click Lookup key, type the e-mail of the person whose key you expect to have been published, check if the rest of the data looks credible and import it.

If you do not want to use the GUI app, you can search in websites like http://keys.gnupg.net

Note: Some people may have several keys; ask them which one you should use (usually the most recently created one is the one you want).

Note 2: After publishing a key it may take some time until it shows up – be patient.

Encrypting files

Using Finder

Go to the folder where the file is, right click on it, select Services -> OpenPGP: Encrypt File, then select the key(s) that should be able to decrypt it and click Encrypt. If the file is called file.txt, you will see a new file called file.txt.gpg.

Using CLI

gpg -e --armor file.txt

Enter the e-mails of the recipient(s) and hit enter when you are done.

or if you want to encrypt some text:

echo "text" | gpg -ea

Decrypting files

Using Finder

Go to the folder where the file is, right click on it, select Services -> OpenPGP: Decrypt File, enter your password and the decrypted file should show up.

Using CLI

gpg -d file.txt.gpg

View Keys

gpg -k

Adding PGP key in GitHub

Go to github.com -> Settings -> SSH and GPG keys -> New GPG Key

Copy your public key

gpg --export --armor MYKEYID

Paste it in GitHub and click Save.

Then you have to tell git to sign your commits

git config --global commit.gpgsign true
git config --global user.signingkey MYKEYID

If you created your GitHub account with a different e-mail address than the one specified in the PGP key, you can go to Settings -> Emails and add the new e-mail address. Then proceed with the verification by clicking the link sent to your inbox.

If you still have problems, check the e-mail specified in your ~/.gitconfig file. It should match the one from the PGP key.

Deep(ish) dive

View secret keys with their long IDs

gpg --list-secret-keys --keyid-format LONG

Backup and restore secret keys


gpg --export-secret-keys MYKEYID > my-private-key.asc

Import from backup:

gpg --import my-private-key.asc


If your private key leaks, or maybe you do not want to use it anymore, you have to send the key server a revocation certificate. Ideally you would still have the private key and could generate it at any moment, but sometimes keys get lost, so you may want to generate a revocation certificate now and back it up somewhere.

gpg --output revocation.crt --gen-revoke myemail@example.com

When you decide to revoke the key, you have to import the certificate and send it to the key server like this:

gpg --import revocation.crt
gpg --send-keys KEYID

Replace KEYID with the ID of your key

Signing other people’s keys

TLDR: You can sign keys to tell others that this key is good.

If you trust the person, you can sign their key and send it back to them; you will be listed as someone who signed the key.

gpg --sign-key email@example.com

You have to send them the signed key back

gpg --output /tmp/signed.key --export --armor email@example.com

and they have to import it to benefit from you signing it:

gpg --import signed.key

If people you trust have signed a key, that is usually a good indicator.

How to add second WAN port to Linksys EA2700 with dd-wrt

First of all – this is a terrible router, don’t buy it. This post is only for when you already have one and need to squeeze the last drops of life out of it.

Why would you want a second WAN port? In my case, the one I already had stopped working, so I was wondering if there is a way to use one of the other 4 as WAN. There is.


There is already a very good article about DD-WRT on the Linksys EA2700 by Sacha Häberling; I am not going to repeat everything, I’ll just add some details.

  1. First you need to get rid of the ridiculous modern UI that Linksys ships with the router and switch back to the Linksys Classic UI. I made the mistake of assuming that since I already had it, I could skip this step – no, do it even if you have already reverted to the old firmware.
  2. Do a 30-30-30 reset (30 seconds holding reset with the power on, unplug the power cable for 30 seconds, plug it in again and hold for another 30 seconds; the reset button is pressed the whole time).
  3. Download the latest firmware you think might work from the DD-WRT FTP server. I haven’t had time to experiment with a lot of them to see if the latest works reliably. The original article recommends r30471; the latest that someone confirmed still works well is r30805, so I installed that one. If you have time – check the latest one. Install it from the firmware update menu in the web interface.
  4. Log in to the new interface, go to the DD-WRT administration panel > Commands and execute the commands from the article.
  5. Do another 30-30-30 reset and that should be it. Then configure your router as usual.


So far so good; now I want to make port 4 as labeled on the box (actually port 3 internally) another WAN port. If you need to know more about what is going on, I strongly advise you to check dd-wrt’s explanation of how ports and vlans work.

Long story short: you have 5 LAN ports, usually split in 2 virtual LAN groups – one for WAN (or “the Internet”) and the other for your local LAN network. The fact that we can move a physical LAN port to another virtual LAN makes this whole thing possible.

For the EA2700 there are 3 virtual groups; I am not really sure why, but I suspect it is because of the dual band or the guest network or something.

Log in via telnet or ssh to the router and execute:
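The inspection step would be something along these lines (the nvram variables are the standard dd-wrt ones for Broadcom routers; your values will differ):

```
nvram show | grep ports
```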

What this means is:
– 5 and 8 are so called processor ports, so we don’t care about them for now.
– the asterisk after 5 and 8 in the first two vlans indicates these are the local virtual LANs, meaning that they are supposed to receive internet from the outside.
– port 4 is the WAN port (we count from 0)
Guides recommend saving your initial configuration somewhere in case you need to restore it.

Let’s move port 3 (4 on the box) out of vlans 0 and 1 and into vlan 2.
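In nvram terms this would look roughly like the following (the values here are hypothetical – start from what nvram show printed for you and just move the 3 between the lists):

```
nvram set vlan0ports="0 1 2 5*"
nvram set vlan1ports="4 5"
nvram set vlan2ports="3 5"
```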

Then we can check the port/vlans:

The first number after portXvlans indicates the vlan for the port; the rest are attributes – 16 means “tagged”, and there is some kind of convention to always tag the CPU port.
Now we want to make port 3 another WAN port:
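The command would be along these lines (hypothetical value – per the format above, the first number is the vlan; keep whatever attribute flags your current output shows):

```
nvram set port3vlans="2"
```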

and finish by committing our changes and restarting:
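Committing is done with the standard nvram commands:

```
nvram commit
reboot
```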

Plug the cable from your ISP into the fourth port and you should have a still working old router, even though it tried to get rid of you.

My VIM Setup

This will be my first post in English. I give up – after so many years, having doubts is not going to make my English any better, so I hope you can at least understand me.

Why ?

I have been using VIM as my main editor for PHP and JavaScript for more than a year now.

What I learned is that I really like this minimalist approach – building your own “IDE” step by step, including only the things you need. If you prefer a full IDE – please go with IntelliJ (the best I have used in this class), but if you are going to use Sublime Text, VSCode, Atom or whatever, there is no reason not to use VIM instead.

Once I got used to all the keybindings and adjusted them a bit to fit my workflow, I got hooked. Now I can barely work with a “normal” text editor. I spent some time learning touch typing and now I get really irritated when my fingers have to move away from the home row of the keyboard. VIM helps a ton with this. I even added a plugin for Firefox called vimium so I can control my browser with vim key bindings (it’s not ideal, but it’s something).

Train yourself

You really need to teach yourself out of the habit of using the arrow keys. In my experience this won’t happen until you just disable them. Just put this in your .vimrc:
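Something like this (the standard way to disable arrow keys in normal and visual mode; extend with insert-mode mappings if you are hardcore):

```vim
" disable arrow keys so hjkl becomes muscle memory
noremap <Up>    <Nop>
noremap <Down>  <Nop>
noremap <Left>  <Nop>
noremap <Right> <Nop>
```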


Plugin manager

First we are going to start with the plugin manager. I prefer Vundle; once you set it up, it is really easy to work with. You have other options like Pathogen or vim-plug. It doesn’t matter so much which one you choose – just pick one.


I am going to share with you just the essentials.

vim-javascript adds better syntax highlighting. If you are writing code for nodejs,

vim-node will give you better support for “gf” (go to file) and “gd” (go to definition), etc.

php-vim will get you started with PHP

YouCompleteMe will give you great autocompletion.

NERDTree will give you the file browsing capabilities that you need. At first I was using it in a separate window to the left of the one where I edit code, in classic IDE fashion, but soon I realized that I don’t really need it there all the time. Now I use it only when I want to open a new file. Note that this plugin is way more powerful than it initially seems. You can create new files/folders with “ma” or delete them with “md”. Just open NERDTree by typing “:NERDTree” or simply “:e .” and hit “?” to see the full list of options.

BufExplorer gives you a list of all open buffers so you can easily navigate between them. I have mapped it to <leader>w so I can easily see everything when I have a lot of files opened.

WinTabs is an excellent plugin that represents all the open files as tabs that you can easily switch between. Your right hand sits on the keys J–; and your left on A–F. VIM already uses HJKL for navigation, so it came naturally to me to map :WintabsNext to <leader>f and :WintabsPrevious to <leader>a.

ALE, or Asynchronous Lint Engine, is an amazing plugin that lets you integrate linters like eslint or php-cs-fixer with VIM. You get all the best features of the linters you choose, plus automatic fixing of your code. No more unformatted code, no more mixed tabs and spaces, no need to come up with excuses for why you used double instead of single quotes.

ack.vim is a plugin that lets you search for text in your project’s files. I personally chose to configure it to work with the_silver_searcher, which is much faster. Just type something like :Ack @todo and you will get a list of all files that contain this string.
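Pointing ack.vim at the_silver_searcher is a one-liner; this is the configuration suggested in ack.vim’s own README:

```vim
" Use ag (the_silver_searcher) instead of ack when it is available
if executable('ag')
  let g:ackprg = 'ag --vimgrep'
endif
```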

Ctags is the way to navigate inside your code, and there are several options to choose from, but I really like vim-gutentags. It just works: install ctags, then the plugin, and you are done. I find this one most useful for PHP.

phpactor is where it gets serious. It gives you very nice features like automatically adding “use Whatever\Class\You\Need”. You can see where your class is used, and it adds code navigation, jump to definition, refactoring functions, etc. I basically can’t write PHP without this plugin.


Bonus Tips

One of the best things I have done is remapping my <leader> key to space.

The default leader key is “\”, but it puts too much stress on my right pinky finger, and after a while it just hurts. Thumbs are much tougher. Moving to the tab on the right is <space>f, to the left <space>a. It made everything so much easier for me.
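The remap itself is a single line, which must appear near the top of ~/.vimrc, before any mapping that uses <leader>:

```vim
" Use space as the leader key
let mapleader = " "
```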


I hope this was helpful. Explore and experiment. I will try to publish my dotfiles soon.

Storing passwords on Linux, Windows and Mac

I really like the Keychain on the Mac, but it has its limitations, the main one being that you cannot use it on other operating systems.

It turns out that perhaps the only cross-platform way to do this is with KeePass. There are many combinations of software you could use to implement something like this, but here is what I have found works best.

On Debian, KeePass2 is available in the package repositories, so we can install it easily.
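On Debian that is just:

```shell
sudo apt-get install keepass2
```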

Next we add a Firefox plugin; I personally prefer PassIFox. This plugin requires KeePassHttp to work, and KeePassHttp in turn needs mono-complete.

After installing mono-complete, download KeePassHttp.plgx, put it in /usr/lib/keepass2 and restart KeePass.

At this point your Firefox passwords are stored in KeePass. The cherry on top is to put the kdbx file in ownCloud, so that it is synchronized between all your devices.


For macOS I recommend MacPass, although it is not very stable.

Android also has an ownCloud client, and you can use KeePassDroid.

Even iOS fans will be pleasantly surprised by the combination of ownCloud + KeePass Touch. For Windows, I don’t think I need to explain.


All in all, you get password management for at least 5 operating systems, completely free and synchronized online through ownCloud: the closest you can get to the Mac Keychain, with the bonus that it works on (almost) every device you probably own.



Edit: November 2018

PassIFox does not work with Firefox Quantum. You can use Kee together with KeePassRPC instead.

A certificate for the Apple Push Notification Service

To send push notifications through APNs, you can use a library like ZendService\AppleApns.

Their documentation explains how to use the library, but not how to generate the required certificate, a procedure which can turn out to be a bit more complicated than it looks.

To start, log in to developer.apple.com and, in the Identifiers section, click on App IDs. If you have not created an App ID for your application, create one, then click on its name in the list and press the Edit button.


Scroll down until you find Push Notifications, and from there choose Create Certificate. Follow the on-screen instructions, open the Keychain, and so on.

As Milen, the best iOS dev south of the Balkan mountains, puts it, the most important thing here is to put something meaningful in the Common Name field, for example MyAppName Push Production or something along those lines.

Once you have generated the certificate, download it and import it into the Keychain.

Type “push” in the search field; you should get 3 results: one certificate, one private key and one public key.


Here is the important part: select the certificate and the private key, and click export on both.


Export with the p12 extension. Then open a terminal and type:
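The original command was not preserved here, but the standard way to convert the exported .p12 into a .pem is with openssl (the filenames apns.p12 / apns.pem are placeholders; use whatever you exported, and enter the export password when prompted):

```shell
# Convert the Keychain .p12 export into a .pem usable by ZendService\AppleApns.
# -nodes   leaves the private key unencrypted in the output
# -clcerts keeps only the client certificate (drops any CA certs)
openssl pkcs12 -in apns.p12 -out apns.pem -nodes -clcerts
```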


Take that pem file, feed it to ZendService\AppleApns (for example), and everything should work.