Sometimes you’re super eager to get started with a new project! Seems easy – you set up a git repo, use
django-admin startproject to generate an empty project, start adding dependencies… You want to use Django REST framework for the API, so you install that. Then you need to connect Postgres for the database… Oh, but you want to deploy to Heroku, so you need to configure that DATABASE_URL environment variable and hook that up in settings.py – there was that one project where we had that working already, we can copy/paste it for sure… Celery for async tasks… Redis… 😴… Wait, what were we building again?
If the scenario described above sounds familiar, then like me you’ve run into the problem of boring, repetitive and uncreative work necessary to set up a modern web app project. A lot of boilerplate is needed to get a basic project working, and best practices keep changing. Sure, there are solutions like cookiecutter-django, which is quite nice, but with ~40 requirements spread across 3 files and ~500 lines of settings spread across 4 files it might be overkill when you’re just getting started. More importantly, while cookiecutter is great for initial project generation, it doesn’t allow you to easily update a project afterwards. Regenerating the project, even when exactly the same prompt answers are selected, results in “Error: "my_awesome_project" directory already exists”. As already stated, best practices change over time, and it would be useful to have an easy way to update your project boilerplate occasionally.
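For reference, the DATABASE_URL hookup mentioned above usually boils down to a few lines in settings.py. Many projects use the dj-database-url package for this, but the core idea can be sketched with just the standard library – the helper name and the fallback URL below are made-up examples, not code from any particular project:

```python
# settings.py (sketch) – turn a Heroku-style DATABASE_URL into Django's DATABASES
import os
from urllib.parse import urlparse

def parse_database_url(url):
    """Split postgres://user:password@host:port/name into Django DB settings."""
    parts = urlparse(url)
    return {
        "ENGINE": "django.db.backends.postgresql_psycopg2",
        "NAME": parts.path.lstrip("/"),
        "USER": parts.username,
        "PASSWORD": parts.password,
        "HOST": parts.hostname,
        "PORT": parts.port or 5432,
    }

# The fallback URL is only an example for local development.
DATABASES = {
    "default": parse_database_url(
        os.environ.get("DATABASE_URL", "postgres://user:secret@localhost:5432/mydb")
    )
}
```

On Heroku, DATABASE_URL is set for you when a Postgres add-on is attached, so the same settings file works locally and in production.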
Continue reading Deploy a Django REST API to Heroku in 5 minutes
We are all a bit lazy in this post-holiday period, so what better project to work on during these relaxed evenings at home than a home automation system? Having Docker containers on a physical device with access to all the other IoT devices on our network that expose APIs – TVs, speakers or maybe even droids – and being able to iteratively upgrade those containers gives us ample opportunity to play.
I love the elegance of resin.io’s Docker container deployment & upgrade method, so I use it a lot for hobby projects & freelance work. In this tutorial, I’ll show you how to create a Python Flask app with periodic Celery tasks for controlling your TV via the Chromecast API. All of the source code can be found in this repo. So, go get a hot cup of tea, clone the repo and let’s get started…
Source: Home automation using Python, Flask & Celery
As I went searching for an RStudio equivalent for Python, I discovered IPython notebook, which I briefly described in this Stack Overflow answer:
IPython has a really cool sub-project called IPython notebook. It basically allows you to interactively code and document what you’re doing in one interface, and later on export it as a notebook or script, or print it as static HTML (and therefore PDF as well).
It starts a web application locally and you use it from your browser.
There’s also a Qt console for IPython, a similar project with inline plots, which is a desktop application.
As shown in this video, this is how you try out the speech recognition helloworld using Sphinx from Python on Ubuntu…
$ sudo apt-get install python-pocketsphinx pocketsphinx-hmm-wsj1 pocketsphinx-lm-wsj
And the code (a script called speech_recognition.py, which you can download as a gist) goes as follows (you may need to change the paths to your language model and hidden Markov model files):
Continue reading Speech recognition helloworld in Python
Instructions on how to set up an Ubuntu cluster can be found at https://help.ubuntu.com/community/MpichCluster.
I’ve updated a few outdated commands there myself so it shouldn’t be too hard to follow the instructions. The only thing I personally did differently was that I didn’t create a new user, but instead used my old account on all the machines (the important thing is that the username be the same everywhere).
In this post I’ll explain how to make a Python script to utilize this cluster using the MPI standard for parallel programming.
To prepare your Python interpreter for parallel programming, you first need some sort of an MPI interface. Several exist, so it’s up to you to choose. I used mpi4py, which can be installed through Synaptic or with:
sudo apt-get install python-dev python-mpi4py # other potential packages to consider - python-mpi python-scipy python-numpy
Continue reading Ubuntu cluster setup for MPI parallel programming in Python