April 17, 2015

SSSHHH! Testing Python SSH client with auth via private key

This post is about unit testing SSH clients written in Python. The key point is the usage of public and private RSA keys for authentication. There are some docs, articles and examples on the web, but most of them use username/password authentication. You may dig around paramiko's test cases and examples, you may find 'MockSSH' interesting, you may have already heard about Twisted's 'conch' and 'cred' subsystems. But if you expect to find answers to some real-life questions in the docs, then welcome to the wonderland! Yay! And if you decide to look into the sources of that stuff, then you are on a highway to hell for sure. So, I hope this post may somehow help someone someday.

First of all, let's talk about dependencies. We are going to test a simple SSH client, and the 'paramiko' library will help us implement a lightweight one. Next, we will need a lightweight SSH server which can be run on demand and which our client can connect to. To do this we'll use the 'Twisted' framework. Finally, we will need to run our tests. I'm quite lazy, so I'm going to run them via the 'nose' tool. So, here's our final list of dependencies:
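Roughly, the list boils down to this (pin exact versions if you want reproducible builds):

paramiko
twisted
nose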

Looks good. Let's install them:
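Something like this will do, assuming you install straight from PyPI:

$ pip install paramiko twisted nose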

Now we need to create public and private RSA keys for testing purpose. 'ssh-keygen' is our choice for this:
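For example (the file names match the ones used below; -N "" means an empty passphrase):

$ ssh-keygen -t rsa -f user.key -N ""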

In this example the keys are located in the current directory and their names are 'user.key' and 'user.key.pub'. The private key has no passphrase to keep the example simple. If you are too lazy to create the keys on your own, you can download mine:

Now we are ready to strike. What should we do? We need to test our keys. Let's create our SSH server for this. Oops. I've already done it. Nice of me, innit? You are welcome to download it:

You do not need to look inside 'mock_server.py' at this moment. Now you can run it. Give it a try:
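If the defaults described below are fine for you, it's simply:

$ python mock_server.py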

By default it will run at 'localhost:2222', it will know about the existence of user 'user', and it will look for the corresponding keys in the current directory. Pass the '--help' argument to see how you can change these values.

Now it's time to connect to our server. First, let's make our private key visible to ssh-agent:
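Something along these lines (start the agent first if it's not running yet):

$ eval "$(ssh-agent)"
$ ssh-add user.key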

We are ready to connect, so let's do this:
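With the default interface and port it looks like this:

$ ssh user@localhost -p 2222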

We can see a kind greeting from the server. Let's try to talk to it:

Okay, well, now you have a machine gun. HO-HO-HO. Our server can process some commands. Now we are sure that we have an endpoint to connect to and that we use a private key for authentication.

If you are still reading this, you may ask: "Hey, so what about the promised unit testing? How can a daemon help us?" — and you'll be right. Actually, we do not need to run the mock daemon from the console for unit testing. We'll launch the mock SSH server in a separate thread during the tests, but it still may be useful for you to run the console daemon for some reason. Who knows?

So, let's start writing our unit tests by defining imports and so on:
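A minimal sketch of the imports (the name of the stop helper is my assumption; check 'mock_server.py' for the real one):

import unittest

import paramiko

from mock_server import start_threaded_server, stop_threaded_server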

In addition to the console daemon, the 'mock_server' module provides the ability to start and stop the server in a thread on demand. We are going to start and stop the server before and after the execution of the test case, respectively:
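Roughly like this (a sketch; again, the stop helper's name is an assumption):

class SSHClientTestCase(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        # interface, port, username and path to the directory with keys
        start_threaded_server('127.0.0.1', 2222, 'user', '.')

    @classmethod
    def tearDownClass(cls):
        stop_threaded_server()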

The 'start_threaded_server' method accepts exactly the same parameters as the console daemon: interface, port, username and the path to the directory with keys.

We are ready to create and delete an SSH client for each separate test:
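For example (still inside 'SSHClientTestCase'; the key path is the one used above):

    def setUp(self):
        self.client = paramiko.SSHClient()
        self.client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        self.client.connect('127.0.0.1', port=2222, username='user',
                            key_filename='user.key')

    def tearDown(self):
        self.client.close()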

Note that we use the private key to connect to the server. We will use the 'exec_command' method of paramiko's client to run commands on the server:
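A sketch of such a test ('some_command' and its expected output are hypothetical: they depend on what your mock server implements):

    def test_some_command(self):
        stdin, stdout, stderr = self.client.exec_command('some_command')
        self.assertEqual(stdout.read().strip(), b'expected output')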

Lazy people can download this module ;)

And then execute tests:
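Assuming the test module is called 'test_ssh_client.py' (name it whatever you like):

$ nosetests test_ssh_client.py -v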

This point may be considered the end of this article. Congratulazioni!

There are still a few words I need to say. First of all, handlers of commands which are sent from the console and handlers of commands which are sent via paramiko's 'exec_command' are quite different things. The former are invoked within the 'SSHMockProtocol.lineReceived' method and they are easy to use and understand:
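A rough sketch of the idea (the attribute and helper names here are my assumptions, not the actual 'mock_server.py' code): look up a handler for the received line and write its result back to the terminal.

    def lineReceived(self, line):
        handler = self.handlers.get(line.strip(), self.handle_unknown)
        self.terminal.write(handler())
        self.terminal.nextLine()
        self.showPrompt()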

The latter are invoked within the 'SSHMockAvatar.execCommand' method:
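A very rough sketch of what goes on there (the 'handle_command' helper is hypothetical; the real 'mock_server.py' may differ): write the command's output to the process protocol as if a subprocess had produced it, then finish as if that subprocess had exited with status 0.

from twisted.internet import error
from twisted.python import failure

# inside SSHMockAvatar:
def execCommand(self, protocol, cmd):
    # writing to the SSHSessionProcessProtocol sends data to the client's stdout
    output = self.handle_command(cmd)
    protocol.write(output)
    # pretend the "subprocess" exited cleanly: this sends the exit-status
    # to the client and closes the channel
    protocol.processEnded(failure.Failure(error.ProcessDone(None)))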

This part is not so trivial and it demands some special magic. Note that the 'protocol' argument is an instance of 'twisted.conch.ssh.session.SSHSessionProcessProtocol', which in general is used to run subprocesses via Twisted's reactor.

Please tell me in the comments if there is a better or more elegant approach.

It may be a nice homework task for you to update the example so that you can also use a private key protected with a passphrase. Another good thing to think about is multi-user support.

Good luck and have a nice day!

September 30, 2014

Nginx: pass request to location B if location A fails with 404

If you need to make Nginx pass a request to location B when location A fails with a 404 error, then I've got a solution for you.

It turns out to be a rather simple task. All you need is:

  • Define location A.
  • Define location B.
  • Point A's error_page to B.
  • Allow interception of errors from A.

Here is an example:
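Something like this (a sketch of the relevant part; the ports are described below):

server {
    listen 80;

    location / {
        include uwsgi_params;
        uwsgi_pass 127.0.0.1:9001;
        uwsgi_intercept_errors on;
        error_page 404 = @fallback;
    }

    location @fallback {
        proxy_pass http://127.0.0.1:5000;
    }
}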

Here we have a uWSGI worker listening for requests on port 9001. In my case it's a Django application running under Vagrant. If a requested page is not found, the request will be passed to another application listening on port 5000.

If you need to deal with something different from uWSGI, you might change the directive names from "uwsgi_*" to "proxy_*", just like in the case of the 'fallback' location.


You can handle errors with other status codes too.
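For example, to fall back on server errors as well:

error_page 404 500 502 503 504 = @fallback;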


Feel free to dig around the full example of the Nginx config. Maybe it will become useful too. Good luck!

September 28, 2014

HTML vertical tabs (w/jQuery)

I think I've created a really nice implementation of vertical tabs in HTML. I really like it because I'm not a frontend developer, but it works and feels simple \(•◡•)/.

Other reasons? Well, let's see:

  • Depends on nothing but jQuery.
  • Has really small and clean implementation.
  • Follows the DRY principle: you define your tabs in only one place and you don't need that mess with IDs. It's especially useful when you render some template and need to include/create some tabs on demand.
  • Automatic creation of tabs menu.
  • Automatic menu & content size.
  • Doesn't corrupt your URL with '#' anchors.
  • Easy to adopt.

Bonus: looks great with HTML KickStart.

Where is it? Try out: http://jsfiddle.net/oblalex/xrvf827f/2/. Enjoy!
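If you just want the gist without opening the fiddle, here is a minimal sketch of the idea (not the actual fiddle code, and the CSS that lays the menu out vertically is omitted): the panes are defined once, and the menu is built from their titles automatically.

<div class="vtabs">
  <div class="vtab" data-title="First">Content of the first tab</div>
  <div class="vtab" data-title="Second">Content of the second tab</div>
</div>

<script>
$(function () {
  var $tabs = $('.vtabs .vtab');
  var $menu = $('<ul class="vtabs-menu"></ul>').prependTo('.vtabs');

  $tabs.each(function (i) {
    $('<li></li>')
      .text($(this).data('title'))
      .appendTo($menu)
      .on('click', function () {
        $tabs.hide().eq(i).show();            // show only the selected pane
        $menu.children().removeClass('active');
        $(this).addClass('active');
      });
  });

  $menu.children().first().click();           // activate the first tab
});
</script>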

September 25, 2014

Google Drive API: upload files to a folder using "Service Account"

The case

You have a server application which needs to upload some files to a specific folder in Google Drive which is owned by someone (e.g., by you).

Preface

I have spent a lot of time trying to figure out how to do it. It was a really big quest, because:

  1. Google has tons of docs on their API.
  2. Those docs are really useless: lots of words without specifics.
  3. Docs have many cross-references which might point to nowhere or to some place which is out of date.
  4. It's hard to get links to some important places, e.g. to the list of available API scopes.
  5. Auth for non-humans is poorly documented and can be a pain.
  6. Your "Google account" and your "Service account" are quite different things, so they might seem to have different file storages, hence, this requires some work with permissions.

For example, the docs for the GitHub API are not big, but they are clear and to the point.

The real part

So, to start you will need to:

  1. Sign in to your Google account.
  2. Go to Google API Console.
  3. Create new project.
  4. Click "APIs & auth" on the left side panel.
  5. Click "APIs", search for "Drive API" and enable it.
  6. Click "Credentials" and then "Create new Client ID".
  7. Select "Service account" and create new Client ID.
  8. You will be automatically prompted to download your private key in 'PKCS12' format.
  9. The password for accessing it will be shown in a pop-up. You may never need it, but it's better to put it in some secret place.
  10. Download that key and keep it private on your system.
  11. After that you will see "Client ID" and "Email address" for your application.
  12. Go to your Google Drive. Create some folder and open its "Sharing settings".
  13. Add your service email to the list of allowed users and allow it to edit the contents of that folder.
  14. Note the folder ID somewhere.

NOTE: Sharing your folder with the service account is just a special case. I'm not sure if it's fully secure to use the service email, as it contains a significant part of the Client ID. I think it's ok if you and only you can see the list of permissions.

You can store your files in the storage of your service account instead. You will need to read the API docs for sharing and changing permissions to make the files accessible even by you (I gave up at this point).


Be attentive while copying the service email. Don't grab any extra whitespace or newlines.


Enter the code

Now, you can write some code. Let's start from imports:
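A sketch of the imports ('apiclient' and 'oauth2client' come from 'google-api-python-client'):

import httplib2

from apiclient.discovery import build
from apiclient.http import MediaFileUpload
from oauth2client.client import SignedJwtAssertionCredentials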

Here you can see that 2 non-standard packages are used: 'apiclient' and 'oauth2client'. They are a part of 'google-api-python-client'. You can install it by running:
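Straight from PyPI:

$ pip install google-api-python-client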

However, to make 'SignedJwtAssertionCredentials' work you will need to use PyOpenSSL, or PyCrypto 2.6 or later. You can choose either of them, but read the notes below.

PyOpenSSL might already be present in your system. If you need to use it inside your virtualenv (and I hope you do), then create a link:
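Assuming a Debian/Ubuntu layout and Python 2.7 (adjust the paths for your system):

$ ln -s /usr/lib/python2.7/dist-packages/OpenSSL \
        $VIRTUAL_ENV/lib/python2.7/site-packages/OpenSSL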

You may use 'PyCrypto' directly, but it does not work with the 'PKCS12' format. So you will need to convert your private key to something understandable by 'PyCrypto':
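Something like this (the file names are just an example):

$ openssl pkcs12 -in drive.p12 -nodes -nocerts > drive.pem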

This will convert 'PKCS12' to 'PEM', but that's not all: you will also need to manually strip the "Bag Attributes" and "Key Attributes" sections from it.

Let's define some constants for our example:
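All values below are placeholders, of course:

SERVICE_EMAIL = '1234567890-abcdef@developer.gserviceaccount.com'
PRIVATE_KEY_PATH = 'drive.pem'
OAUTH_SCOPE = 'https://www.googleapis.com/auth/drive'
FOLDER_ID = 'your-folder-id-here'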

This is just an example. Never ever hardcode your API keys and secrets. Load them from the environment, from a JSON file with secrets, or from wherever you like.

To use the API for Google services you will need to get a 'service' object. This is done in the same manner for all services:
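A sketch built on top of the constants above (Drive API v2):

def get_service():
    with open(PRIVATE_KEY_PATH, 'rb') as f:
        private_key = f.read()
    credentials = SignedJwtAssertionCredentials(SERVICE_EMAIL,
                                                private_key,
                                                scope=OAUTH_SCOPE)
    http = credentials.authorize(httplib2.Http())
    return build('drive', 'v2', http=http)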

File uploading consists of defining a body, a media body and invoking the 'insert' method:
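Roughly (a sketch; 'parents' is what puts the file into our shared folder):

def upload_file(service, path, title, mime_type, folder_id):
    body = {
        'title': title,
        'mimeType': mime_type,
        'parents': [{'id': folder_id}],
    }
    media_body = MediaFileUpload(path, mimetype=mime_type, resumable=True)
    return service.files().insert(body=body, media_body=media_body).execute()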

Let's try this out and upload some text file:
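For example (the file name is arbitrary):

service = get_service()
response = upload_file(service, 'hello.txt', 'hello.txt', 'text/plain', FOLDER_ID)
print response['alternateLink']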

This will print a URL which can be used by humans for sharing. You may use 'pprint' to see the whole response:
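Like this:

from pprint import pprint
pprint(response)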

Upset about having no full example? Don't worry, of course I'll share it with you: see the full example.

September 18, 2014

Blast into PyEnv

I've discovered two nice things recently: pyenv and pyenv-virtualenv. Here I'd like to show how to install them and start using them in a couple of minutes.

0
Prepare to take-off.

sudo apt-get install -y make build-essential libssl-dev zlib1g-dev libbz2-dev libreadline-dev libncurses-dev libsqlite3-dev wget curl llvm git


1
Go to your home dir and clone pyenv and pyenv-virtualenv (you may clone them to wherever you like, just pay attention to PYENV_ROOT in the next step).

cd
git clone git://github.com/yyuu/pyenv.git .pyenv
git clone https://github.com/yyuu/pyenv-virtualenv.git ~/.pyenv/plugins/pyenv-virtualenv


2
Update your env.

echo '' >> ~/.bash_profile
echo '### PyEnv' >> ~/.bash_profile
echo 'export PYENV_ROOT="$HOME/.pyenv"' >> ~/.bash_profile
echo 'export PATH="$PYENV_ROOT/bin:$PATH"' >> ~/.bash_profile
echo 'eval "$(pyenv init -)"' >> ~/.bash_profile
echo 'eval "$(pyenv virtualenv-init -)"' >> ~/.bash_profile


Use .zshrc instead of .bash_profile for ZSH.


3
Apply changes.

exec $SHELL


4
Install some versions of Python. They will be compiled, so take a look at common build problems if you encounter any.

pyenv install 2.6.9
pyenv install 2.7.8
pyenv install 3.4.1
pyenv rehash


5
Voila! You've got 4 different Pythons!

pyenv versions
* system (set by /home/alex/.pyenv/version)
  2.6.9
  2.7.8
  3.4.1


6
Let's play with virtualenv now and create one for Python 2.6.9.

mkdir sandbox && cd sandbox
pyenv virtualenv 2.6.9 sandbox-2.6.9
pyenv activate sandbox-2.6.9


7
Ensure it's OK.

pyenv versions              
  system
  2.6.9
  2.7.8
  3.4.1
* sandbox-2.6.9 (set by PYENV_VERSION environment variable)


8
Let's pull something from PyPI.

pip install bpython
which bpython 
/home/alex/.pyenv/versions/sandbox-2.6.9/bin/bpython


9
Launch the rocket.

bpython
>>> import sys
>>> print sys.version
2.6.9 (unknown, Sep 18 2014, 14:57:31) 
[GCC 4.8.2]


10
Yay! Enjoy environment virtualization!

June 25, 2014

Update names of default directories in Gnome

I like to change the names of the default directories in my user's "Home". E.g., "Documents" to "docs", "Downloads" to "dw", "Pictures" to "pic" and so on.

To make such changes visible to Gnome, you need to update Gnome's config:

$ xdg-user-dirs-update

To update a particular directory, e.g. "Desktop", run:

$ xdg-user-dirs-update --set DESKTOP ~/desk

Please, refer to xdg-user-dirs-update man page for more info.

February 04, 2014

Setting really default browser

When Chrome or Chromium spontaneously becomes your default browser while all the normal GUI settings say that it isn't, then run:

sudo update-alternatives --config x-www-browser

and select whatever you want, e.g.:

There are 2 choices for the alternative x-www-browser (providing /usr/bin/x-www-browser).

  Selection    Path                       Priority   Status
------------------------------------------------------------
* 0            /usr/bin/chromium-browser   40        auto mode
  1            /usr/bin/chromium-browser   40        manual mode
  2            /usr/bin/firefox            40        manual mode

So many ways to do one thing... why?