Chris Ball: Announcing GitTorrent: A Decentralized GitHub

(This post is an aspirational transcript of the talk I gave to the Data Terra Nemo conference in May 2015. If you’d like to watch the less eloquent version of the same talk that I actually gave, the video should be available soon!)

I’ve been working on building a decentralized GitHub, and I’d like to talk about what this means and why it matters — and more importantly, show you how it can be done and real GitTorrent code I’ve implemented so far.

Why a decentralized GitHub?

First, the practical reasons: GitHub might become untrustworthy, get hacked — or get DDoS'd by China, as happened while I was working on this project! I know GitHub seems to be doing many things right at the moment, but there often comes a point at which companies that have raised $100M in venture capital funding start making decisions that their users would strongly prefer them not to.

There are philosophical reasons, too: GitHub is closed source, so we can’t make it better ourselves. Mako Hill has an essay called Free Software Needs Free Tools, which describes the problems with depending on proprietary software to produce free software, and I think he’s right. To look at it another way: the experience of our collaboration around open source projects is currently being defined by the unmodifiable tools that GitHub has decided that we should use.

So that’s the practical and philosophical, and I guess I’ll call the third reason the “ironical”. It is a massive irony to move from many servers running the CVS and Subversion protocols, to a single centralized server speaking the decentralized Git protocol. Google Code announced its shutdown a few months ago, and their rationale was explicitly along the lines of “everyone’s using GitHub anyway, so we don’t need to exist anymore”. We’re quickly heading towards a single central service for all of the world’s source code.

So, especially at this conference, I expect you’ll agree with me that this level of centralization is unwise.

Isn’t Git already decentralized?

You might be thinking that while GitHub is centralized, the Git protocol is decentralized — when you clone a repository, your copy is as good as anyone else’s. Isn’t that enough?

I don’t think so, and to explain why I’d like you to imagine someone arguing that we can do without BitTorrent because we have FTP. We would not advocate replacing BitTorrent with FTP, and the suggestion doesn’t even make sense! First — there’s no index of which hosts have which files in FTP, so we wouldn’t know where to look for anything. And second — even if we knew who owned copies of the file we wanted, those computers aren’t going to be running an anonymous FTP server.

Just like Git, FTP doesn’t turn clients into servers in the way that a peer-to-peer protocol does. So that’s why Git isn’t already the decentralized GitHub — you don’t know where anything’s stored, and even if you did, those machines aren’t running Git servers that you’re allowed to talk to. I think we can fix that.

Let’s GitTorrent a repo!

Let’s jump in with a demo of GitTorrent – that is, cloning a Git repository that’s hosted on BitTorrent:

1  λ git clone gittorrent://
2  Cloning into 'recursers'...
4  Okay, we want to get: 5fbfea8de70ddc686dafdd24b690893f98eb9475
6  Adding swarm peer:
8  Downloading git pack with infohash: 9d98510a9fee5d3f603e08dcb565f0675bd4b6a2
10 Receiving objects: 100% (47/47), 11.47 KiB | 0 bytes/s, done.
11 Resolving deltas: 100% (10/10), done.
12 Checking connectivity... done.

Hey everyone: we just cloned a git repository over BitTorrent! So, let’s go through this line by line.

Lines 1-2: Git actually has an extensible mechanism for network protocols built in. The way it works is that my git clone line gets turned into “run the git-remote-gittorrent command and give it the URL as an argument”. So we can do whatever we want to perform the actual download, and we’re responsible for writing git objects into the new directory and telling Git when we’re done, and we didn’t have to modify Git at all to make this work.

So git-remote-gittorrent takes it from here. First we connect to GitHub to find out what the latest revision for this repository is, so that we know what we want to get. GitHub tells us it’s 5fbfea8de...

Lines 4-6: Then we go out to the GitTorrent network, which is a distributed hash table just like BitTorrent’s, and ask if anyone has a copy of commit 5fbfea8de... Someone said yes! We make a BitTorrent connection to them. The way that BitTorrent’s distributed hash table works is that there’s a single operation, get_nodes(hash), which tells you who can send you content that you want, like this:

get_nodes('5fbfea8de70ddc686dafdd24b690893f98eb9475') =
  [, ...]

Now, in standard BitTorrent with “trackerless torrents”, you ask for the files that you want by their content, and you’d get them and be happy. But a repository the size of the Linux kernel has four million commits, so just receiving the one commit 5fbfea8de... wouldn’t be helpful; we’d have to make another four million requests for all the other commits too. Nor do we want to get every commit in the repository every time we ‘git pull’. So we have to do something else.

Lines 8-12: Git has solved this problem — it has this “smart protocol format” for negotiating an exchange of git objects. We can think of it this way:

Imagine that your repository has 20 commits, 1-20, where the 15th commit is bbbb and the most recent, 20th commit is aaaa. The Git protocol negotiation would look like this:

1> have aaaa
2> want aaaa
2> have bbbb

Because of the way the git graph works, node 1> here can look up where bbbb is on the graph, see that you’re only asking for five commits, and create you a “packfile” with just those objects, all in a three-step exchange.
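As a concrete illustration, here is a minimal Python sketch of that negotiation over a toy linear history (the names make_history and negotiate are hypothetical, and a real Git graph has branches and merges that this ignores):

```python
def make_history(n):
    """Toy linear history: commit "ci" has parent "c(i-1)"."""
    return {f"c{i}": (f"c{i-1}" if i > 1 else None) for i in range(1, n + 1)}

def negotiate(history, server_tip, client_have):
    """Return the commits the server must pack: everything reachable
    from server_tip that isn't already reachable from client_have."""
    def ancestors(tip):
        out = []
        while tip is not None:
            out.append(tip)
            tip = history[tip]
        return out

    have = set(ancestors(client_have)) if client_have else set()
    return [c for c in ancestors(server_tip) if c not in have]

history = make_history(20)               # commits c1 .. c20
print(negotiate(history, "c20", "c15"))  # just the five commits c16..c20
```

With no “have” at all (a fresh clone), the same function returns every commit, which is why the packfile in the demo above contains every object.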

That’s what we’re doing here with GitTorrent. We ask for the commit we want and connect to a node with BitTorrent, but once connected we conduct this Smart Protocol negotiation in an overlay connection on top of the BitTorrent wire protocol, in what’s called a BitTorrent Extension. Then the remote node makes us a packfile and tells us the hash of that packfile, and then we start downloading that packfile from it and any other nodes who are seeding it using Standard BitTorrent. We can authenticate the packfile we receive, because after we uncompress it we know which Git commit our graph is supposed to end up at; if we don’t end up there, the other node lied to us, and we should try talking to someone else instead.
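The authentication step at the end is worth spelling out. A Git object id is the SHA-1 of a short header plus the object body, so once we have unpacked the pack we can recompute the tip commit’s id ourselves and compare it to the hash we asked the swarm for. A sketch (verify_pack is a hypothetical name, and real verification walks the whole object graph, not just the tip):

```python
import hashlib

def git_object_id(obj_type, body):
    """A Git object id is sha1 over '<type> <size>\\0<body>'."""
    header = f"{obj_type} {len(body)}\0".encode()
    return hashlib.sha1(header + body).hexdigest()

def verify_pack(expected_tip, unpacked_commit_body):
    """After unpacking, the tip commit we reconstruct must hash to the
    commit id we originally asked the swarm for."""
    return git_object_id("commit", unpacked_commit_body) == expected_tip

body = b"tree 4b825dc642cb6eb9a060e54bf8d69288fbee4904\n\nempty\n"
tip = git_object_id("commit", body)
assert verify_pack(tip, body)              # an honest peer checks out
assert not verify_pack(tip, b"tampered")   # a lying peer is detected
```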

So that’s what just happened in this terminal. We got a packfile made for us with this hash — and it’s one that includes every object because this is a fresh clone — we downloaded and unpacked it, and now we have a local git repository.

This was a git clone where everything up to the actual downloading of git objects happened as it would in the normal GitHub way. If GitHub decided tomorrow that it’s sick of being in the disks and bandwidth business, it could encourage its users to run this version of GitTorrent, and it would be like having a peer to peer “content delivery network” for GitHub, falling back to using GitHub’s servers in the case where the commits you want aren’t already present in the CDN.

Was that actually decentralized?

That’s some progress, but you’ll have noticed that the very first thing we did was talk to GitHub to find out which hash we were ultimately aiming for. If we’re really trying to decentralize GitHub, we’ll need to do much better than that, which means we need some way for the owner of a repository to let us know what the hash of the latest version of that repository is. In short, we now have a global database of git objects that we can download, but we still need to know which objects we want — we need to emulate the part of GitHub where you go to /user/repo and know that you’re receiving the very latest version of that user’s repo.

So, let’s do better. When all you have is a hammer, everything looks like a nail, and my hammer is the distributed hash table we just built to keep track of which nodes have which commits. Very recently, substack noticed that there’s a BitTorrent extension for making each node partly responsible for maintaining a network-wide key-value store, and he coded it up. It adds two more operations to the DHT, get() and put(); put() gives you 1000 bytes per key to place a message into the network that can be looked up later, with your answer repeated by other nodes after you’ve left the network. There are two types of key. The first is immutable keys, which work as you might expect: you take the hash of the data you want to store, and your data is stored with that hash as the key.

The second type of key is a mutable key. Here the key you look up is the hash of the public key of a crypto keypair, and the owner of that keypair can publish signed updates as values under that key. Updates come with a sequence number, so any time a client sees an update for a mutable key, it checks whether the update has a newer sequence number than the value it currently has recorded, and whether the update is signed by the public key corresponding to the hash table key, which proves that the update came from the key’s owner. If both of those things are true, it updates to the newer value and starts redistributing it.

This has many possible uses, but my use for it is as the place to store what your repositories are called and what their latest revisions are. So you’d make a local Git commit, push it to the network, and push an update to your personal mutable key reflecting that there’s a new latest commit. Here’s a code description of the new operations:

// Immutable key put
hash(value) = put({
  value: 'some data'
})

// Mutable key put
hash(key) = put({
  value: 'some data',
  key: key,
  seq: n
})

// Get
value = get(hash)

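To make the acceptance rule concrete, here is a small Python sketch of a node’s store implementing both put flavours. The signature here is a stand-in HMAC keyed on the public key, purely so the example is self-contained; the real DHT storage extension (BEP 44) uses ed25519 signatures, and all the names below are hypothetical:

```python
import hashlib
import hmac

def sign(pubkey, payload):
    """Stand-in for a real ed25519 signature (illustration only)."""
    return hmac.new(pubkey, payload, hashlib.sha256).hexdigest()

class DhtNode:
    def __init__(self):
        self.store = {}

    def put_immutable(self, value):
        key = hashlib.sha1(value).hexdigest()   # key is the hash of the data
        self.store[key] = value
        return key

    def put_mutable(self, pubkey, value, seq, sig):
        key = hashlib.sha1(pubkey).hexdigest()  # key is the hash of the pubkey
        current = self.store.get(key)
        if current is not None and seq <= current["seq"]:
            return False                        # stale update: ignore it
        if sig != sign(pubkey, value + str(seq).encode()):
            return False                        # bad signature: not the owner
        self.store[key] = {"value": value, "seq": seq, "sig": sig}
        return True                             # accept and redistribute

    def get(self, key):
        return self.store.get(key)

node = DhtNode()
pub = b"my-public-key"
v1 = b"master: 5fbfea8de70ddc686dafdd24b690893f98eb9475"
assert node.put_mutable(pub, v1, 1, sign(pub, v1 + b"1"))
assert not node.put_mutable(pub, b"old", 0, sign(pub, b"old0"))  # seq too low
```
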
So now if I want to tell someone to clone my GitHub repo on GitTorrent, I don’t give them the URL; instead I give them this long hex number, the hash of my public key, which is used as a mutable key on the distributed hash table.

Here’s a demo of that:

λ git clone gittorrent://81e24205d4bac8496d3e13282c90ead5045f09ea/recursers

Cloning into 'recursers'...

Mutable key 81e24205d4bac8496d3e13282c90ead5045f09ea returned:
name:         Chris Ball
    master: 5fbfea8de70ddc686dafdd24b690893f98eb9475

Okay, we want to get: 5fbfea8de70ddc686dafdd24b690893f98eb9475

Adding swarm peer:

Downloading git pack with infohash: 9d98510a9fee5d3f603e08dcb565f0675bd4b6a2

Receiving objects: 100% (47/47), 11.47 KiB | 0 bytes/s, done.
Resolving deltas: 100% (10/10), done.
Checking connectivity... done.

In this demo we again cloned a Git repository over BitTorrent, but we didn’t need to talk to GitHub at all, because we found out what commit we were aiming for by asking our distributed hash table instead. Now we’ve got true decentralization for our Git downloads!

There’s one final dissatisfaction here, which is that long strings of hex digits do not make convenient usernames. We’ve actually reached the limits of what we can achieve with our trusty distributed hash table, because usernames are rivalrous, meaning that two different people could submit updates claiming ownership of the same username, and we wouldn’t have any way to resolve their argument. We need a method of “distributed consensus” to give out usernames and know who their owners are. The method I find most promising is actually Bitcoin’s blockchain — the shared consensus that makes this cryptocurrency possible.

The deal is that there’s a certain type of Bitcoin transaction, called an OP_RETURN transaction, that instead of transferring money from one wallet to another, leaves a comment as your transaction that gets embedded in the blockchain forever. Until recently you were limited to 40 bytes of comment per transaction, raised to 80 bytes per transaction as of Bitcoin Core 0.11. I believe making any Bitcoin transaction on the blockchain currently costs around $0.08 USD, so you pay your 8 cents to the miners and the network in compensation for polluting the blockchain with your 80 bytes of data.

If we can leave comments on the blockchain, then we can leave a comment saying “Hey, I’d like the username Chris, and the hash of my public key is <x>“, and if multiple people ask for the same username, this time we’ll all agree on which public key asked for it first, because blockchains are an append-only data structure where everyone can see the full history. That’s the real beauty of Bitcoin — this currency stuff is frankly kind of uninteresting to me, but they figured out how to solve distributed consensus in a robust way. So the comment in the transaction might be a short line starting with “gittorrent”, carrying the requested username and the hash of the public key.

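The “everyone agrees who asked first” rule is just a deterministic scan over the ordered transaction comments. A sketch, assuming a hypothetical comment format of "gittorrent!<username>!<pubkey-hash>" (the exact format is an assumption of this example):

```python
def resolve_usernames(op_return_comments):
    """Scan OP_RETURN comments in blockchain order; the first claim on
    each username wins and later claims are ignored."""
    owners = {}
    for comment in op_return_comments:
        tag, username, pubkey_hash = comment.split("!")
        if tag == "gittorrent" and username not in owners:
            owners[username] = pubkey_hash
    return owners

claims = [
    "gittorrent!cjb!81e24205d4bac8496d3e13282c90ead5045f09ea",
    "gittorrent!cjb!0000000000000000000000000000000000000000",  # too late
]
owners = resolve_usernames(claims)
assert owners["cjb"] == "81e24205d4bac8496d3e13282c90ead5045f09ea"
```

Because every node sees the same ordered history, every node computes the same owners table.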
It’s interesting, though — maybe that “gittorrent” at the beginning doesn’t have to be there at all. Maybe this could be a way to register one username for every site that’s interested in decentralized user accounts with Bitcoin, and then you’d already own that username on all of them. This could be a separate module, a separate software project, that you drop in to your decentralized app to get user accounts that Just Work, in Python or Node or Go or whatever you’re writing software in. Maybe the app would monitor the blockchain and write to a database table, and then there’d be a plugin for web and network service frameworks that knows how to understand the contents of that table.

It surprised me that nothing like this seems to exist already in the decentralization community. I’d be happy to work on a project like this and make GitTorrent sit on top of it, so please let me know if you’re interested in helping with that.

By the way, username registration becomes a little more complicated than I just said, because the miners could see your message, and decide to modify it before adding it to the blockchain, as a registration of your username to them instead of you. This is the equivalent of going to a domain name registrar and typing the domain you want in their search box to see if it’s available — and at that moment of your search the registrar could turn around and register it for themselves, and then tell you to pay them a thousand bucks to give it to you. It’s no good.

If you care about avoiding this, Bitcoin has a way around it, and it works by making registration a two-step process. Your first message would be asking to reserve a username by supplying just the hash of that username. The miners don’t know from the hash what the username is so they can’t beat you to registering it, and once you see that your reservation’s been included in the blockchain and that no-one else got a reservation in first, you can send on a second comment that says “okay, now I want to use my reservation token, and here’s the plain text of that username that I reserved”. Then it’s yours.
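That commit-then-reveal flow can be sketched as follows. The Registrar class and the salt are illustrative additions (a salt stops miners from precomputing hashes of common usernames); the real scheme lives in OP_RETURN transactions, not in a Python object:

```python
import hashlib

def digest(username, salt):
    return hashlib.sha256(username.encode() + salt).hexdigest()

class Registrar:
    """Two-step registration: commit to a hash first, reveal the name later."""
    def __init__(self):
        self.reservations = {}   # digest -> first claimant to commit it
        self.owners = {}         # username -> claimant

    def reserve(self, claimant, d):
        # first reservation of a digest wins; later copies are ignored
        self.reservations.setdefault(d, claimant)

    def reveal(self, claimant, username, salt):
        d = digest(username, salt)
        if self.reservations.get(d) == claimant and username not in self.owners:
            self.owners[username] = claimant
            return True
        return False

r = Registrar()
salt = b"random-nonce"
r.reserve("cjb", digest("cjb", salt))
r.reserve("miner", digest("cjb", salt))   # miner copies the digest: too late
assert r.reveal("cjb", "cjb", salt)
assert not r.reveal("miner", "cjb", salt)
```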

(I didn’t invent this scheme. There’s a project called Blockname, from Jeremie Miller, that works in exactly this way, using Bitcoin’s OP_RETURN transaction for DNS registrations on bitcoin’s blockchain. The only difference is that Blockname is performing domain name registrations, and I’m performing a mapping from usernames to hashes of public keys. I’ve also just been pointed at Blockstore, which is extremely similar.)

So to wrap up, we’ve created a global BitTorrent swarm of Git objects, and worked on user account registration so that we can go from a user experience that looks like this:

git clone gittorrent://

to this:

git clone gittorrent://81e24205d4bac8496d3e13282c90ead5045f09ea/foo

to this:

git clone gittorrent://cjb/foo

And at this point I think we’ve arrived at a decentralized replacement for the core feature of GitHub: finding and downloading Git repositories.

Closing thoughts

There’s still plenty more to do — for example, this doesn’t do anything with comments or issues or pull requests, which are all very important aspects of GitHub.

For issues, the solution I like is actually storing issues in files inside the code repository, which gives you nice properties like merging a branch means applying both the code changes and the issue changes — such as resolving an issue — on that branch. One implementation of this idea is Bugs Everywhere.

We could also imagine issues and pull requests living on Secure Scuttlebutt, which synchronizes append-only message streams across decentralized networks.

I’m happy just to have got this far, though, and I’d love to hear your comments on this design. The design of GitTorrent itself is (ironically enough) on GitHub and I’d welcome pull requests to make any aspect of it better.

I’d like to say a few thank yous — first to Feross Aboukhadijeh, who wrote the BitTorrent libraries that I’m using here. Feross’s enthusiasm for peer-to-peer and the way that he runs community around his “mad science” projects made me feel excited and welcome to contribute, and that’s part of why I ended up working on this project.

I’m also able to work on this because I’m taking time off from work at the moment to attend the Recurse Center in New York City. This is the place that used to be called “Hacker School”, and it changed its name recently. The first reason for the name change was that they wanted to get away from the connotations of a school where people are taught things, when it’s really more like a retreat for programmers to improve their programming through project work for three months. I’m very thankful to them for allowing me to attend.

The second reason they decided to change their name was that their international attendees kept showing up at the US border and saying “I’m here for Hacker School!” and... they didn’t have a good time.

Finally, I’d like to end with a few more words about why I think this type of work is interesting and important. There’s a certain grand, global scale of project (let’s pick GitHub and Wikipedia as exemplars) where the only way for the project to exist at that scale after it becomes popular is to raise tens of millions of dollars a year, as GitHub and Wikipedia have, to spend running it, hoarding disks and bandwidth in big data centers. That limits the kinds of projects we can create and imagine at that scale to those we can make a business plan for: raising tens of millions of dollars a year to run. I hope that decentralized, peer-to-peer algorithms let us imagine ambitious software that doesn’t require that level of investment, and instead just requires its users to cooperate and share with each other.

Thank you all very much for listening.

(You can check out GitTorrent on GitHub, and discuss it on Hacker News. You could also follow me on Twitter.)

(Post: Announcing GitTorrent: A Decentralized GitHub, by cjb, May 29, 2015 04:23 PM)

Pragmatic Emacs: Tweaking deft: quicker notes

I posted recently about using deft to make quick notes, and after using it for a bit I like it a lot, but wanted to make a few tweaks to the way it works. This gave me an excuse to learn a few lisp techniques, which other lisp novices might find useful.

I really like the way that org-capture lets me quickly make a note and return me seamlessly to where I was before, and so I wanted deft to be a bit more like that. By default, if I launch deft and make a note, I have to:

  • save the buffer
  • kill the buffer which takes me back to the deft menu
  • quit deft

This is too much work. Okay, I could save the note buffer and then switch back to my original buffer using e.g. winner-undo but that is still too much work!

Instead I’ve dabbled in a bit of lisp coding which I think illustrates a few nice ways you can customise your emacs with minimal lisp skills (like mine).

To start with, I made my first attempt at advising a function. This is a way to make a function built into Emacs or a package behave differently. Here I advise deft to save my window configuration before it launches:

;;advise deft to save window config
(defun bjm-deft-save-windows (orig-fun &rest args)
  (setq bjm-pre-deft-window-config (current-window-configuration))
  (apply orig-fun args))

(advice-add 'deft :around #'bjm-deft-save-windows)

Side note: in principle, I think something similar could be done using hooks, but my reading of the deft code suggested that the hooks would run after the window configuration had been changed, which is not what I wanted.

I then make a function to save the current buffer, kill the current buffer, kill the deft buffer, and then restore the pre-deft configuration. I then set up a shortcut for this function.

;;function to quit a deft edit cleanly back to pre deft window
(defun bjm-quit-deft ()
  "Save buffer, kill buffer, kill deft buffer, and restore window config to the way it was before deft was invoked"
  (interactive)
  (save-buffer)
  (kill-this-buffer)
  (switch-to-buffer "*Deft*")
  (kill-this-buffer)
  (when (window-configuration-p bjm-pre-deft-window-config)
    (set-window-configuration bjm-pre-deft-window-config)))

(global-set-key (kbd "C-c q") 'bjm-quit-deft)

So now, I can launch deft with C-c d make a quick note and then quit with C-c q to get back to where I was. This is pleasingly close to the experience of org-capture.

Note that bjm-quit-deft is not bullet proof; there is nothing to stop you running it in a buffer that is not a note opened from deft, but if you do, nothing terrible will happen. If I was a better lisp programmer I could probably come up with a way to test if the current buffer was opened from deft and issue a warning from bjm-quit-deft if not, but I am not much of a lisp programmer!

More to follow on tweaking deft…

(Post: Tweaking deft: quicker notes, by Ben Maughan, May 29, 2015 04:02 PM)

Pragmatic Emacs: Expand region

I posted recently about cutting text by word, line and sentence, but by default most of the commands cut from the point to the beginning or end of the word/line/sentence. I previously posted a nice fix for cutting a whole line, but in this post I’ll cover a more general solution.

The package expand-region expands the selected region by semantic units, i.e. going from word to sentence to paragraph in prose, but also by sensible units for code.

If you use my recommended setup, Prelude, then the command you need is already there – just hit C-= and away you go.

Otherwise, install the package expand-region, and then add the following to your emacs config file:

;;expand region
(require 'expand-region)
(global-set-key (kbd "C-=") 'er/expand-region)

I would make an animated gif to illustrate this, but there is a great emacs rocks video by the author of the package. Check out the other videos in that series for more good things.

(Post: Expand region, by Ben Maughan, May 29, 2015 03:54 PM)

Irreal: Avy

I've written several times about ace-jump-mode and how it's now my main navigation tool. I'm also a huge fan of ace-window, which I use through the excellent hydra from abo-abo that controls all my window operations.

Abo-abo, it turns out, has written a replacement for ace-jump, called Avy, that extends its functionality. Abo-abo writes excellent software and is fastidious about maintaining it, but ace-jump was working well for me and my inherent laziness kept me from switching. Then Artur Malabarba wrote about his upgrading to Avy and I was shamed into at least considering upgrading.

Finally, I realized that ace-window was based on avy and was therefore already installed on my machines. All I had to do was switch my key binding for ace-jump to avy-goto-word-1 and I would be using avy instead of ace-jump, as well as having access to the rest of avy's functionality. You can take a look at avy's README to see what some of that functionality is.

I also rebound 【Meta+g g】 and 【Meta+g Meta+g】 to avy-goto-line. It's a more featureful replacement for the built-in goto-line.

I've been using the new setup for a while now and am very happy with it. I finally deleted ace-jump from my packages so you can consider me all in now. If you ever have more than two windows open in Emacs you absolutely must have ace-window. Once you have ace-window, all the rest is available for free. I can't overstate how useful this package is. I can't imagine using Emacs without it now.

(Post: Avy, by jcs, May 29, 2015 11:36 AM)

(or emacs: lispy 0.26.0 is out

Lispy 0.25.0 came out two months ago; 177 commits later comes version 0.26.0. The release notes are stored on GitHub, and I'll post them here as well.

The coolest changes are the new reader-based M, which:

  • Gives out very pretty output, with minor diffs for actual code, which is quite impressive considering all newline information is discarded and then reconstructed.
  • Works for things that Elisp can't read, like #<marker ...> etc, very useful for debugging.
  • Customizable rule sets; rules for Elisp and Clojure come with the package.

The improvements to g and G are also great:

  • Because of caching, the prettified tags can be displayed in less than 0.15s on Emacs' lisp/ directory, which has 21256 tags in 252 files.
  • The tags collector looks at file modification time, so you get the updated tags right after you save.

The details for these and other features follow below.

Fixes
  • C-k should delete the whole multi-line string.
  • y should work for all parens, not just (.
  • p should actually eval in other window for dolist.
  • Prevent pairs inserting an extra space when at minibuffer start.
  • ol works properly for active region.

New Features


  • xf will pretty-print the macros for Elisp.
  • M-m works better when before ).
  • Fix ', ^ after a ,.
  • Improve / (splice) for quoted regions.
  • Z works with &key arguments.
  • The new M is used in xf.
  • Allow to flatten Elisp defsubst.
  • c should insert an extra newline for top-level sexps.

Paredit key bindings

You can have only Paredit + special key bindings by using this composition of key themes:

(lispy-set-key-theme '(special paredit))

The default setting is:

(lispy-set-key-theme '(special lispy c-digits))

New algorithm for multi-lining

M is now bound to lispy-alt-multiline instead of lispy-multiline. It has a much better and more customizable algorithm.

See these variables for customization:

  • lispy-multiline-threshold
  • lispy--multiline-take-3
  • lispy--multiline-take-3-arg
  • lispy--multiline-take-2
  • lispy--multiline-take-2-arg

They are set to reasonable defaults. But you can customize them if you feel that a particular form should be multi-lined in a different way.

lispy-multiline-threshold is a bit of an ad hoc heuristic to make things nice. Set it to nil if you want a completely rigorous multi-line. With the default setting of 32, expressions shorter than this won't be multi-lined. This makes 95% of the code look really good.

The algorithm has a safety check implemented for Elisp: if read on the transformed expression returns something different than read on the original expression, an error will be signaled and no change will be made. For expressions that can't be read, like buffers/markers/windows/cyclic lists/overlays, only a warning will be issued (lispy can read them, unlike read).

d and > give priority to lispy-right

For the expression (a)|(b), (a) will be considered the sexp at point, instead of (b). This is consistent with show-paren-mode. If a space is present, all ambiguities are resolved anyway.

b works fine even if the buffer changes

I've switched the point and mark history to markers instead of points. When the buffer is changed, the markers are updated, so b will work fine.

Extend Clojure reader

In order for i (prettify code) to work for Clojure, it must be able to read the current expression. I've been extending the Elisp reader to understand Clojure. In the past commits, support was added for:

  • empty sets
  • commas
  • auto-symbols, like p1__7041#

Extend Elisp reader

It should be possible to read any #<...> form, as well as #1-type forms.

g and G get a persistent action for ivy

This is a powerful feature that the helm back end has had for a long time. When you press g, C-n and C-p will change the current selection. But C-M-n and C-M-p will change the current selection and move there, without exiting the completion.

This also means that you can call ivy-resume to resume either g (lispy-goto) or G (lispy-goto-local).

e works with defvar-local

As you might know, the regular C-x C-e or eval-buffer will not reset the values of defvar, defcustom and such (you need C-M-x instead). But e does it, now also for defvar-local.

Improve faces for dark backgrounds

I normally use a light background, so I didn't notice before that the faces looked horrible with a dark background.

The ` will quote the region

If you have a region selected, pressing ` will wrap the region in quotes.

Customize the file selection back end for V

V (lispy-visit) lets you open a file in the current project. Previously, it used projectile. Now it uses find-file-in-project by default, with the option to customize it to projectile.

Fixup calls to looking-back

Apparently, looking-back isn't very efficient, so it's preferable to avoid it or at least add a search bound to improve efficiency. Also, the bound became mandatory in Emacs 25, while it was optional before.

M-m will work better in strings and comments.

See the relevant test:

(should (string= (lispy-with "\"See `plu|mage'.\"" (kbd "M-m"))
                 "\"See ~`plumage'|.\""))

Thanks to this, to e.g. get the value of a quoted var in a docstring or a comment, or jump to its definition, you can M-m. Then, you can step-in with i to select the symbol without quotes.

Update the tags strategy

A much better algorithm, with caching and examination of file modification times, is now used. This means that the tags should be up-to-date 99% of the time, even immediately after a save, and no unnecessary re-parsing will be done. And it all works fine with the lispy-tag-arity modifications.

1% of the time, lispy-tag-arity stops working; I don't know why, since it's hard to reproduce. You can then pass a prefix arg to refresh tags, bypassing the cache, e.g. 2g or 2G.

Also a bug is fixed in Clojure tag navigation, where the tag start positions were off by one char.

The fetched tags retrieval is fast: less than 0.15s on Emacs' lisp/ directory to retrieve 21256 tags from 252 files. Which means it's lightning fast on smaller code bases (lispy has only 651 tags).

xj can also step into macros

lispy-debug-step-in, bound to xj locally and C-x C-j globally can now step into macros, as well as into functions. This command is very useful for Edebug-less debugging. Stepping into macros with &rest parameters should work fine as well.

p can now lax-eval function and macro arguments

When positioned at function or macro args, p will set them as if the function or macro was called with empty args, or the appropriate amount of nils. If the function is interned and interactive, use its interactive form to set the arguments appropriately.

Again, this is very useful for debugging.

Allow to paste anywhere in the list using a numeric arg

As you might know, P (lispy-paste) is a powerful command that:

  • Replaces selection with current kill when the region is active.
  • Yanks the current kill before or after the current list otherwise.

Now, you can:

  • Yank the current kill to become the second element of the list with 2P
  • Yank the current kill to become the third element of the list with 3P
  • ...

It's OK to pass a larger arg than the length of the current list. In that case, the paste will be made into the last element of the list.

Update the way / (lispy-splice) works

When there's no next element within parent, jump to parent from appropriate side. When the region is active, don't deactivate it. When splicing region, remove random quotes at region bounds.

This change makes the splice a lot more manageable. For example, starting with this Clojure code, with | marking the current point:

(defn read-resource
  "Read a resource into a string"
   |(slurp ( path))))

A double splice // will result in:

(defn read-resource
  "Read a resource into a string"
   slurp path))

After xR (reverse list), 2 SPC (same as C-f), -> (plain insert), [M (back to parent and multi-line), the final result:

(defn read-resource
  "Read a resource into a string"
  |(-> path

This also shows off xR - lispy-reverse, which reverses the current list. Finally, reverting from the last code to the initial one can be done simply with xf - it will flatten the -> macro call.


Thanks to all who contributed, enjoy the new stuff. Would also be nice to get some more feedback and bug reports. Currently, it might seem that a large part of the features are either perfect or unused.

-1:-- lispy 0.26.0 is out (Post)--L0--C0--May 28, 2015 10:00 PM

Irreal: Colors and Emacs in the Terminal

Here's a useful tip for those of you who run Emacs in a terminal.

-1:-- Colors and Emacs in the Terminal (Post jcs)--L0--C0--May 28, 2015 07:58 PM

punchagan: Say Howdy with Emacs!

Staying in touch with people is something I'm not very good at. Since I am not on the networks popular among my friends and family – FB and Whatsapp – I don't even see random updates from people, which would give me some sense of being in touch.

I recently read some old posts by Sacha Chua and was struck by how much code she had for contact management. This post in particular inspired me to try to be more meticulous about how I stay in touch with people. Michael Fogleman blogged about his contact-management workflow using keepintouch. It seemed to do most of what I wanted, but I wanted this to be integrated with my org-contacts-db, and I felt that having native elisp code would make it easier to hook up email, chat, etc. to it.

I ended up writing a small utility called howdy to help me keep in touch with people. It currently has only a couple of features:

  • M-x howdy lets me update the last contacted timestamp for a contact.
  • Shows me contacts that I'm out of touch with in the agenda, once I add the following snippet to an agenda file.
    * Howdy

I also have a few hooks that tie jabber messages and email into updating the db. I've added them to howdy-hooks.el in case anybody else wants to use them; they can also serve as examples for writing other hooks. Feel free to contribute other hooks or suggest improvements. The library also ships with a modest test suite, which will hopefully make it easier for others to contribute.

I'm looking forward to experimenting with this over the next few weeks and improving it. Hopefully, it'll help me keep in touch, better than I do now.

-1:-- Say Howdy with Emacs! (Post punchagan)--L0--C0--May 28, 2015 01:09 PM

Endless Parentheses: New in Emacs 25.1: Asynchronous Package Menu

It was six months ago, to the day, when I alluded to the fact that Emacs’ package menu needed to go async. The time it took to do a simple list-packages bothered me the most, closely followed by having to go play Minesweeper every time I did a package upgrade. The latter was partially addressed when I added asynchronous package transactions to Paradox, but the former took a bit more work. In Emacs 25.1, at last, the package menu is going async.

You don’t need to do anything special to benefit from this. As soon as you issue M-x list-packages, instead of those “Contacting host: ...” messages which always foretell a many-second hang, the package menu will come up almost instantly. The download of archive information will go on in the background, and once it is done the new information is updated in place.

This has two big advantages.

  1. A fraction of a second after issuing the command you’re already in the menu, free to navigate, search, or mark stuff while the background download is happening.
  2. If you have multiple archives configured (which you should), they are fetched simultaneously. So the entire download will be 2–4 times faster now, even if you decide to sit and wait for it to finish.

It should be noted this only applies to refreshing. Package transactions (installation, upgrade, and deletion) are still synchronous in package.el. Async transactions were implemented for a while, but the outcome was quite far from satisfactory. However, so as not to end on a sad note, you can always turn to Paradox for that.

Lastly, if you’re the kind of person that hates nice things, you can disable this feature with the package-menu-async variable.
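
For completeness, turning it off would presumably be just this line in your init file (package-menu-async is the variable named above, with t as the default):

(setq package-menu-async nil)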

That should be enough about (a)synchronicity for the moment. Come back on Monday, when we go into some big improvements on the filtering engine.

Comment on this.

-1:-- New in Emacs 25.1: Asynchronous Package Menu (Post)--L0--C0--May 28, 2015 12:00 AM

Julien Danjou: OpenStack Summit Liberty from a Ceilometer & Gnocchi point of view

Last week I was in Vancouver, BC for the OpenStack Summit, discussing the new Liberty version that will be released in 6 months.

I've attended the summit mainly to discuss and follow-up new developments on Ceilometer, Gnocchi and Oslo. It has been a pretty good week and we were able to discuss and plan a few interesting things.

Ops feedback

We had half a dozen Ceilometer sessions, and the first one was dedicated to getting feedback from operators using Ceilometer. We had a few operators present, and a few of the Ceilometer team. We had a constructive discussion, and my feeling is that operators struggle with 2 things so far: scaling Ceilometer storage and keeping Ceilometer from killing the rest of OpenStack.

We discussed the first point as being addressed by Gnocchi, and I presented Gnocchi itself a bit, as well as how and why it will fix the storage scalability issues operators have encountered so far.

Ceilometer taking down the OpenStack installation is a more interesting problem. Ceilometer pollsters request information from Nova, Glance… to gather statistics. Until Kilo, Ceilometer used to do that regularly and at a fixed interval, causing high load spikes in OpenStack. With the introduction of jitter in Kilo, this should be less of a problem. However, Ceilometer hits various endpoints in OpenStack that are poorly designed, and hitting those endpoints of Nova or other components triggers a lot of load on the platform. Unfortunately, this makes operators blame Ceilometer rather than the components guilty of poor design. We'd like to push forward on improving these components, but it's probably going to take a long time.


When I started the Gnocchi project last year, I realized pretty soon that we would be able to split Ceilometer itself into different smaller components that could work independently, while still being able to leverage each other. For example, Gnocchi can run standalone and store your metrics even if you don't use Ceilometer – or even OpenStack itself.

My fellow developer Chris Dent had the same idea about splitting Ceilometer a few months ago and drafted a proposal. The idea is to have Ceilometer split into different parts that people could assemble together or run on their own.

Interestingly enough, we had three 40-minute sessions planned to talk and debate this division of Ceilometer, though we all agreed in 5 minutes that it was the right thing to do. Five more minutes later, we agreed on which parts to split. The rest of the time was allocated to discussing various details of that split, and I committed to starting the work with the Ceilometer alarming subsystem.

I wrote a specification on the plane bringing me to Vancouver, which should be approved pretty soon now. I have already started on the implementation work. So fingers crossed, Ceilometer should have a new component in Liberty handling alarming on its own.

This would allow users, for example, to deploy only Gnocchi and Ceilometer alarming. They would be able to feed data to Gnocchi using their own system, and build alarms using the Ceilometer alarm subsystem relying on Gnocchi's data.


We didn't have a Gnocchi-dedicated slot – mainly because I indicated I didn't feel we needed one. We discussed a few points around coffee anyway, and I've been able to draw out a few new ideas and changes I'd like to see in Gnocchi: mainly changing the API contract to be more asynchronous so we can support InfluxDB more correctly, and making the drivers based on Carbonara (the library we created to manipulate timeseries) faster.

All of those – plus a few Oslo tasks I'd like to tackle – should keep me busy for the next cycle!

-1:-- OpenStack Summit Liberty from a Ceilometer & Gnocchi point of view (Post Julien Danjou)--L0--C0--May 26, 2015 09:39 AM

Grant Rettke: New Favorite Programmer Interview Question

What is your key binding for performing a commit?
— Grant Rettke

-1:-- New Favorite Programmer Interview Question (Post Grant)--L0--C0--May 25, 2015 09:16 PM

Endless Parentheses: New in Emacs 25.1: User-selected packages

In Thursday's post on dependency management, I briefly mentioned that package.el now keeps track of which packages the user explicitly requested, and which were pulled in as dependencies. But there’s a bit more to this feature, so it deserves some time in the spotlight.

Simply put, there is now a new custom variable package-selected-packages. This variable stores the names of packages installed explicitly by the user. So every time you do M-x package-install or you do i x in the Package Menu, the name of that package gets added to this list. Packages which get pulled in as dependencies are not added to this list, and those which are explicitly deleted get removed from it. This is how package-autoremove knows what to remove: it just finds packages which (a) are not on this list and (b) are not required by anything else.

But this variable comes with other benefits too. First, the user can edit it manually with the usual customize-variable and use it to keep track of their list of wanted packages. Second, there’s now another command, package-install-selected-packages, which ensures that all packages on the list are installed. This means you can safely move to a new computer, or even just delete your elpa/ subdir. As long as you keep your custom settings you can just invoke the command and all your packages will be reinstalled.
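
To make that concrete, here is a sketch of the kind of saved setting involved (the package names are placeholders, not a recommendation):

;; in custom-file or init.el; package names here are illustrative
(custom-set-variables
 '(package-selected-packages '(magit avy ivy)))
;; then, on the fresh machine:
;; M-x package-install-selected-packages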

There’s one small caveat, which some of you may have noticed. This bookkeeping is done during installation. So, when you finally upgrade to Emacs 25.1, how is it going to know which of your installed packages were user-selected and which were dependencies?

Well, it’s just impossible to know for sure, so it makes an educated guess. It takes all installed packages that are not required by any other installed package, and considers them to have been explicitly installed. This can (and probably will) yield both false positives and negatives, but that only happens the very first time you start Emacs 25. So just keep in mind you may need to customize-variable and fine-tune this list.
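
The guess can be pictured as taking the roots of the dependency graph. A toy sketch with invented data (not package.el's actual code):

;; packages that no other installed package requires are
;; treated as user-selected
(require 'cl-lib)
(let* ((installed '((magit . (dash)) (dash . ()) (ivy . ())))
       (required (apply #'append (mapcar #'cdr installed))))
  (cl-remove-if (lambda (pkg) (memq pkg required))
                (mapcar #'car installed)))
;; => (magit ivy)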

Comment on this.

-1:-- New in Emacs 25.1: User-selected packages (Post)--L0--C0--May 25, 2015 12:00 AM

Emacs Redux: Mastering Emacs (the first Emacs book in over a decade) is out

Mickey Petersen just released Mastering Emacs, the first new book about our beloved editor since Learning GNU Emacs (released way back in 2004).

I haven’t had the time to read the book yet, but being familiar with Mickey’s work I have no doubt it’s outstanding. That’s all from me for now - go buy the book and start mastering Emacs.


I hope we won’t have to wait another decade for the next great Emacs book.

-1:-- Mastering Emacs (the first Emacs book in over a decade) is out (Post)--L0--C0--May 23, 2015 12:19 PM

(or emacs: Ivy-mode 0.5.0 is out

At this point, swiper is only a fraction of ivy-mode's functionality. Still, it's nice to keep them all, together with counsel, in a single repository: counsel-git-grep works much better this way.

Anyway, I'll echo the release notes here, there are quite a few exciting new features.

Fixes
  • TAB shouldn't delete input when there's no candidate.
  • TAB should switch directories properly.
  • require dired when completing file names, so that the directory face is loaded.
  • TAB should work with confirm-nonexistent-file-or-buffer.
  • TAB should handle empty input.
  • work around grep-read-files: it should be possible to simply M-x rgrep RET RET RET.
  • Fix the transition from a bad regex to a good one - you can input a bad regex to get 0 candidates, the candidates come back once the regex is fixed.
  • ivy-switch-buffer should pre-select other-buffer just like switch-buffer does it.
  • Fix selecting "C:\" on Windows.
  • counsel-git-grep should warn if not in a repository.
  • C-M-n shouldn't try to call action if there isn't one.
  • Turn on sorting for counsel-info-lookup-symbol.
  • ivy-read should check for an outdated cons initial-input.

New Features

Out of order matching

I actually like in-order matching, meaning the input "in ma" will match "in-order matching" but not "made in". But users can switch to out-of-order matching with this code:

(setq ivy-re-builders-alist
      '((t . ivy--regex-ignore-order)))

ivy-re-builders-alist is the flexible way to customize the regex builders per collection. Using t here means to use this regex builder for everything. You could choose to have in-order matching for files, out-of-order for buffers, and so on.
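
For instance, a split per collection might look like this (the collection keys are my best guess at the common ones; check the ivy documentation for the exact names):

(setq ivy-re-builders-alist
      '((read-file-name-internal . ivy--regex-plus)          ; in-order for files
        (internal-complete-buffer . ivy--regex-ignore-order) ; out-of-order for buffers
        (t . ivy--regex-plus)))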

New defcustom: ivy-tab-space

Use this to have a space inserted each time you press TAB:

(setq ivy-tab-space t)

ignore case for TAB

"pub" can expand to "Public License".

New command: counsel-load-library

This command is much better than the standard load-library that it upgrades. It applies a sort of uniquify effect to all your libraries, which is very useful:


In this case, I have avy installed both from the package manager and manually. I can easily distinguish them.

Another cool feature: instead of using find-library (which is also bad, since it would report two versions of avy with the same name and no way to distinguish them), you can simply use counsel-load-library and press C-. instead of RET to finalize.

Here's another scenario: first load the library, then call ivy-resume and immediately open the library file.

New command: ivy-partial

Does a partial completion without exiting. Use this code to bind this command in place of ivy-partial-or-done:

(define-key ivy-minibuffer-map (kbd "TAB") 'ivy-partial)

Allow to use ^ in swiper

In regex terms, ^ is the beginning of line. You can now use this in swiper to filter your matches.

New command: swiper-avy

This command is crazy good: it combines the best features of swiper (the whole buffer at once, flexible input length) and avy (quickly selecting one candidate once you've narrowed down to about 10-20 candidates).

For instance, I can enter "to" into swiper to get around 10 matches. Instead of pressing C-n a bunch of times to select the one of the 10 that I want, I just press C-', followed by a or s or d ... to select one of the matches visible on screen.

So both packages use their best feature to cover the other's worst drawback.

Add support for virtual buffers

I was never a fan of recentf until now. The virtual buffers feature works in the same way as ido-use-virtual-buffers: when you call ivy-switch-buffer, your recently visited files as well as all your bookmarks are appended to the end of the buffer list.

Suppose you killed a buffer and want to bring it back: now you do it as if you hadn't killed the buffer but instead buried it. The bookmarks access is also nice.

Here's how to configure it, along with some customization of recentf:

(setq ivy-use-virtual-buffers t)

(use-package recentf
  :config
  (setq recentf-exclude
        '("COMMIT_MSG" "COMMIT_EDITMSG" "github.*txt$"))
  (setq recentf-max-saved-items 60))

Add a few wrapper commands for the minibuffer

All these commands just forward to their built-in counterparts, only they try not to leave the first line of the minibuffer.

  • M-DEL calls ivy-backward-kill-word
  • C-d calls ivy-delete-char
  • M-d calls ivy-kill-word
  • C-f calls ivy-forward-char

Allow to customize the minibuffer formatter

See the wiki on how to customize the minibuffer display to look like this:

100 Find file: ~/
> file3

When completing file names, TAB should defer to minibuffer-complete

Thanks to this, you can TAB-complete your ssh hosts, e.g.:

  • /ss TAB -> /ssh
  • /ssh:ol TAB -> /ssh:oleh@

More commands work with ivy-resume

I've added:

  • counsel-git-grep
  • counsel-git

Others (that start with counsel-) should work fine as well. Also don't forget that you can use C-M-n and C-M-p to:

  • switch candidate
  • call the action for the candidate
  • stay in the minibuffer

This is especially powerful for counsel-git-grep: you can easily check the whole repository for something by just typing in the query and holding C-M-n. The matches will be highlighted swiper-style, of course.

Allow to recenter during counsel-git-grep

Use C-l to recenter.

Update the quoting of spaces

The input is split only on single spaces; in all other space groups, one space is removed.

As you might know, a space is used in place of .* in ivy. In case you want an actual space, you can now quote it even more easily.


Thanks to all who contributed, check out the new stuff, and make sure to bind ivy-resume to something short: it has become a really nice feature.

-1:-- Ivy-mode 0.5.0 is out (Post)--L0--C0--May 22, 2015 10:00 PM

Ryan Rix: Automatically Re-set Emacs Environment

I use =gpg-agent= as an ssh agent as a means to use Yubikey Neo PGP smartcards as physical login tokens. Without a pair of Yubikeys and their passphrases, you can't log in to any of my assets, or as me to any of my work assets. It's pretty great, but it relies on magic environment variables being propagated to the right location.
-1:-- Automatically Re-set Emacs Environment (Post)--L0--C0--May 20, 2015 12:00 AM

Thomas Fitzsimmons: EUDC Improvements

I use Emacs Unified Directory Client (EUDC) for completing email addresses from LDAP and BBDB databases. It’s nice to be able to complete names from LDAP when composing emails, obviously, but it’s also nice in Org mode to M-x eudc-expand-inline someone’s name into my notes.

When I first configured the EUDC LDAP backend for my environment — RHEL 6 ldapsearch, LDAP-over-SSL server — setup was very involved. There were lots of poor defaults, strange extra configuration files, function call requirements, and ldapsearch incompatibilities. EmacsWiki instructions were very long just to get sane “Givenname Surname <email@address>” completion in GNUS.

I filed a bug report with configuration simplifications, bug fixes and EUDC Info manual updates, and somehow I ended up as the EUDC maintainer. I’ve committed the improvements to the Emacs master branch; they’ll be released in Emacs 25.

If you’ve tried EUDC in the past and been turned off by its arcane configuration, you might want to re-read the “LDAP Configuration” section of the Info manual, and try again. If you still can’t get it working, file a bug report at bug-gnu-emacs and I’ll try to respond to it within a few days. Mention EUDC or LDAP in the subject of the report. Likewise, if you do get it working with hacks then let me know via a bug report.

-1:-- EUDC Improvements (Post Thomas Fitzsimmons)--L0--C0--May 17, 2015 04:09 AM

Ryan Rix: Practicing Stress Free Living

Something has changed in how I live my life the last three or four years. I've gone from being an incredibly independent person able to handle damn near anything thrown at me to someone who *struggles* with the day to day.
-1:-- Practicing Stress Free Living (Post)--L0--C0--May 17, 2015 12:00 AM

Emacs Redux: Learning Emacs Lisp

People who have been using Emacs for a while often develop the desire to learn Emacs Lisp, so they can customize Emacs more extensively, develop extra packages and create the ultimate editing experience, uniquely tailored to their needs & preferences.

There are a ton of Emacs Lisp resources out there, but most people generally need only one - the official Emacs Lisp manual. It’s bundled with Emacs and you can start reading right away by pressing C-h i m Elisp RET. If you’re relatively new to programming in general you might also check out the Introduction to Emacs Lisp (C-h i m Emacs Lisp Intro RET), before diving into the manual.

There are also plenty of Emacs Lisp tutorials online, but I’d advise against using them, as most have never been updated since they were originally published, and Emacs Lisp keeps evolving all the time (albeit not as fast as I would have liked). That being said, Learn Emacs Lisp in 15 minutes is a short and sweet intro to the language. You can find more online educational resources on the EmacsWiki.

Trust me on this - any time invested in learning Emacs Lisp will be time well spent!

-1:-- Learning Emacs Lisp (Post)--L0--C0--May 16, 2015 11:37 AM

sachachua: 2015-05-13 Emacs Hangout

Console Emacs vs GUI Emacs, keybindings, Org Mode, cooking, nyan, window management, calendars, SuperCollider

Usual disclaimer: times are approximate, and the note-taker often gets distracted. =)

  • 0:00:00 Emacs configuration
  • 0:11:22 Console Emacs vs GUI Emacs? iTerm integration, mouse support, 256 colours, drop-down menus (although you can get a text one), …
  • 0:14:59 multihop TRAMP
  • 0:16:01 keybinding philosophies, Hyper and Super
  • 0:22:15 Remapping keys on Mac OS X (dealing with separate Alt and Meta)
  • 0:28:04 Org and mobile
  • 0:30:25 emulating hyper and super keys
  • 0:32:15 orgzly
  • 0:33:33 Org Mode and cooking, org-map-entries
  • 0:39:31 nyan
  • 0:43:04 One window, workgroups
  • 0:46:56 winner-mode
  • 0:53:30 rinari, zeus, ruby
  • 0:54:53 neotree
  • 0:58:22 keyboards
  • 1:03:24 conference
  • 1:09:22 calw; also, something about rainbow-mode, and palette, and then later Org Mode
  • 1:23:13 SuperCollider, Overtone, yasnippet
  • 1:45:13 blackink?

Text chat:

Here’s the gif I have as my nyan

Sahil Sinha 9:23 PM
Jack G. 9:24 PM (setq mac-right-command-modifier 'hyper) (setq mac-right-option-modifier 'super) (global-set-key (kbd "H-h") 'er/expand-region)
George Jones 9:32 PM
Jack G. 9:36 PM nyan Cranky_walk.gif
Jack G. 9:42 PM
me 9:42 PM ?
Daniel H 9:46 PM
me 9:48 PM winner-mode
George Jones 9:59 PM
George Jones 9:59 PM Xah Lee writes a LOT about keyboards
Jack G. 10:02 PM
Bogdan Popa 10:10 PM
me 10:11 PM org-gcal
Daniel H 10:12 PM
George Jones 10:12 PM having real trouble hearing…
George Jones 10:20 PM when you open a PDF in docview you can get the text with ^C^T (default bindings)
Jack G. 10:21 PM Thanks George!
George Jones 10:21 PM C-c C-t runs the command doc-view-open-text
me 10:27 PM
sai tejaa Cluri 10:27 PM hi
Jack G. 10:37 PM
me 10:37 PM This was a fun demo of Org Mode and SuperCollider
Levi Strope 10:40 PM Jack your audio is crystal clear now… whatever that change was
Jack G. 10:45 PM
me 10:48 PM

The post 2015-05-13 Emacs Hangout appeared first on sacha chua :: living an awesome life.

-1:-- 2015-05-13 Emacs Hangout (Post Sacha Chua)--L0--C0--May 14, 2015 03:03 AM

Flickr tag 'emacs': 2015-05-13i Balancing remote and in-person Emacs talks -- index card #emacs #emacsconf

sachac posted a photo:

2015-05-13i Balancing remote and in-person Emacs talks -- index card #emacs #emacsconf

-1:-- 2015-05-13i Balancing remote and in-person Emacs talks -- index card #emacs #emacsconf (Post sachac)--L0--C0--May 13, 2015 10:36 PM

Flickr tag 'emacs': 2015-05-13h Sorting through goals for the Emacs conference -- index card #emacs #emacsconf

sachac posted a photo:

2015-05-13h Sorting through goals for the Emacs conference -- index card #emacs #emacsconf

-1:-- 2015-05-13h Sorting through goals for the Emacs conference -- index card #emacs #emacsconf (Post sachac)--L0--C0--May 13, 2015 10:36 PM

Raimon Grau: [ANN] - Helm-dash 1.2.1 released

It's been a long time since any announcement on helm-dash. Now that we've hit 100 stars, we are happy to release a new version which supports third-party docsets.

Kapeli keeps a list of user-contributed docsets which aren't officially supported but live in a contrib repo.

As the way to fetch these docsets differs completely from the official ones, I created a kind of adapter that fetches the docsets' info and offers a curated version through a very simple API. The app that manages this is called dashes-to-dashes. Every 20-something minutes it updates the list of user-contributed docsets and makes it available for any helm-dash user who runs M-x helm-dash-install-user-docset. The official ones are still available via M-x helm-dash-install-docset.

There are some improvements in Windows support, and a few bugfixes here and there.

Also, there's a branch waiting to land that will improve docset narrowing: when searching "rails | controller", it'll narrow the search to just the 'ruby on rails' docset.

Stay tuned, and thanks for the support!
-1:-- [ANN] - Helm-dash 1.2.1 released (Post Raimon Grau)--L0--C0--May 13, 2015 12:14 AM

Julien Danjou: My interview about software tests and Python

I've recently been contacted by Johannes Hubertz, who is writing a new book about Python in German called "Softwaretests mit Python", which will be published by Open Source Press, Munich this summer. His book will feature some interviews, and he was kind enough to let me write a bit about software testing. This is the interview that I gave for his book: Johannes translated it to German for inclusion, and I decided to publish the original English version on my blog today.

How did you come to Python?

I don't recall exactly, but around ten years ago, I saw more and more people using it and decided to take a look. Back then, I was more used to Perl. I didn't really like Perl and was not getting a good grip on its object system.

As soon as I found an idea to work on – if I remember correctly that was rebuildd – I started to code in Python, learning the language at the same time.

I liked how Python worked, and how fast I was able to develop and learn it, so I decided to keep using it for my next projects. I ended up diving into Python core for various reasons, even briefly hacking on projects like Cython at some point, and finally ended up working on OpenStack.

OpenStack is a cloud computing platform entirely written in Python. So I've been writing Python every day since working on it.

That's what pushed me to write The Hacker's Guide to Python in 2013 and then self-publish it a year later in 2014, a book where I talk about doing smart and efficient Python.

It has had great success and has even been translated into Chinese and Korean, so I'm currently working on a second edition of the book. It has been an amazing adventure!

Zen of Python: Which line is the most important for you and why?

I like "There should be one – and preferably only one – obvious way to do it". The opposite is probably what scared me in languages like Perl. Having one obvious way to do it is something I tend to like in functional languages like Lisp, which are, in my humble opinion, even better at that.

For a Python newbie, what are the most difficult subjects in Python?

I haven't been a newbie for a while, so it's hard for me to say. I don't think the language is hard to learn. There are some subtleties in the language itself when you dive deeply into the internals, but for beginners most of the concepts are pretty straightforward. If I had to pick from the language basics, the most difficult thing would be generator objects (yield).

Nowadays I think the most difficult subjects for newcomers are which version of Python to use, which libraries to rely on, and how to package and distribute projects. Fortunately, things are getting better.

When did you start using Test Driven Development and why?

I learned unit testing and TDD at school, where teachers forced me to learn Java, and I hated it. The frameworks looked complicated, and I had the impression I was wasting my time. Which I actually was, since I was writing disposable programs – that's the only thing you do at school.

Years later, when I started to write real and bigger programs (e.g. rebuildd), I quickly ended up fixing bugs… that I had already fixed. That reminded me of unit tests, and that it might be a good idea to start using them to stop fixing the same things over and over again.

For a few years, I wrote less Python and more C and Lua code (for the awesome window manager), and I didn't do any testing. I probably lost hundreds of hours testing manually and fixing regressions – that was a good lesson. Though I had good excuses at the time – it is/was way harder to do testing in C/Lua than in Python.

Since that period, I have never stopped writing "tests". When I started to hack on OpenStack, the project was adopting a "no test? no merge!" policy due to the high number of regressions it had during the first releases.

I honestly don't think I could work on any project that does not have – at least minimal – test coverage. It's impossible to hack efficiently on a code base that you're not able to test with just a simple command. It's also a real problem for newcomers in the open source world. When there are no tests, you can hack something, send a patch, and get a "you broke this" in response. Nowadays, this kind of response sounds unacceptable to me: if there is no test, then I didn't break anything!

In the end, it's just too frustrating to work on untested projects, as I demonstrated in my study of the whisper source code.

What do you think are the most common pitfalls of TDD, and how can they best be avoided?

The biggest problems are deciding when to write tests, and at what level of detail.

On one hand, some people start to write overly precise tests way too soon. Doing that slows you down, especially when you are prototyping an idea or concept you just had. That does not mean you should not test at all, but you should probably start with light coverage, until you are pretty sure that you're not going to rip everything out and start over. On the other hand, some people postpone writing tests forever, and end up with no tests at all, or too thin a layer of them, which leaves the project with pretty low coverage.

Basically, your test coverage should reflect the state of your project. If it's just starting, you should build a thin layer of tests so you can hack on it easily and remodel it if needed. The more your project grows, the more you should make it solid by laying down more tests.

Having overly detailed tests makes it painful to evolve the project at the start. Not having enough in a big project makes it painful to maintain.
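
To make the "thin layer first" idea concrete, here is the kind of minimal early test this suggests, using Python's built-in unittest (the function and all names are invented for illustration):

```python
import unittest

def slugify(title):
    """Toy function under test: lowercase a title and hyphenate it."""
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    # A thin early layer: one happy-path test, refined later as the
    # design stabilizes.
    def test_basic(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

if __name__ == "__main__":
    unittest.main()
```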

Do you think TDD fits and scales well for big projects like OpenStack?

Not only do I think it fits and scales well, I also think it's just impossible not to use TDD in such big projects.

When unit and functional test coverage was weak in OpenStack – at its beginning – it was just impossible to fix a bug or write a new feature without breaking a lot of things without even noticing. We would release version N, and a ton of old bugs present in N-2 – but fixed in N-1 – would be reopened.

For big projects, with a lot of different use cases, configuration options, etc., you need belt and braces. You cannot throw code into a repository hoping it's going to work forever, and you can't afford to test everything manually at each commit. That's just insane.

-1:-- My interview about software tests and Python (Post Julien Danjou)--L0--C0--May 11, 2015 01:39 PM

Chen Bin (redguardtoo): Emacs speed up 1000%

I was still NOT satisfied with my Emacs performance after applying the tricks below:

  • autoload packages
  • idle-load packages
  • compiling *.el to *.elc

After some research, I found I could make my Emacs 1000% faster in 1 minute.

Please note I'm talking about the general performance, not just startup time.

The solution is really simple.

Since I'm a Linux guy and my computer has plenty (24G) of memory, I can keep my setup in memory only.

Step 1, insert below line into /etc/fstab and restart computer:

tmpfs       /tmp        tmpfs       nodev,nosuid,size=8G    0   0

Step 2, run the script "emacs2ram":


#!/bin/bash
# emacs2ram: keep ~/.emacs.d on tmpfs, with an on-disk backup.
# The directory names below are assumptions, in the spirit of the
# ArchLinux wiki script this is based on.
volatile=/tmp/emacs.d-$USER    # lives on the tmpfs mount
link=.emacs.d                  # the path Emacs actually uses
backup=.emacs.d-backup         # persistent copy on disk

if [ -z "$1" ]; then
    echo "Usage:"
    echo "  emacs2ram start"
    echo "  emacs2ram restore"
    exit 1
fi

if [ "$1" == "start" ]; then

    set -efu

    cd ~/

    if [ ! -r "$volatile" ]; then
        mkdir -m0700 "$volatile"
    fi

    # link -> volatile does not exist yet
    if [ "$(readlink "$link")" != "$volatile" ]; then
        # back up the real directory first
        mv "$link" "$backup"
        # then point the link at tmpfs
        ln -s "$volatile" "$link"
    fi

    if [ -e "$link/.unpacked" ]; then
        echo "Sync .emacs.d from memory to backup ..."
        rsync -avq --delete --exclude .unpacked ./"$link"/ ./"$backup"/
        echo "DONE!"
    else
        echo "Sync .emacs.d from disk to memory ..."
        rsync -avq ./"$backup"/ ./"$link"/
        touch "$link/.unpacked"
        echo "DONE!"
    fi
else
    echo "Moving .emacs.d back to disk ..."
    cd ~/
    rm "$link" && mv "$backup" "$link" && rm -rf "$volatile"
    echo "DONE!"
fi

That's all! Please enjoy Emacs as usual.
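The core trick is a three-step dance: back the real directory up on disk, replace it with a symlink into tmpfs, and sync the contents across. Here is a self-contained sketch of that mechanism you can try safely — the directory names are throwaway stand-ins created with mktemp, and plain cp stands in for rsync:

```shell
#!/bin/bash
set -eu

# Throwaway directories standing in for $HOME and the tmpfs mount.
home=$(mktemp -d)
tmpfs=$(mktemp -d)   # pretend this one is tmpfs-backed

# A stand-in for ~/.emacs.d with one config file in it.
mkdir "$home/.emacs.d"
echo "(setq debug-on-error t)" > "$home/.emacs.d/init.el"

# 1. Back the real directory up on disk.
mv "$home/.emacs.d" "$home/.emacs.d-backup"
# 2. Replace it with a symlink into "tmpfs".
ln -s "$tmpfs" "$home/.emacs.d"
# 3. Copy the contents across (the real script uses rsync).
cp -a "$home/.emacs.d-backup/." "$tmpfs/"

# Emacs now reads and writes through the symlink, hitting RAM.
ls -l "$home/.emacs.d"
cat "$home/.emacs.d/init.el"
```

When you are done experimenting, `rm -rf "$home" "$tmpfs"` cleans up the throwaway directories.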

The original script is from the ArchLinux Wiki. I learned this technique eight years ago; I wonder why it took me eight years to apply it.

BTW, I've also moved all my projects into memory, using similar scripts.

UPDATE: I've also published my project-managing script as a gist. It's almost the same as emacs2ram.

-1:-- Emacs speed up 1000% (Post Chen Bin)--L0--C0--May 08, 2015 11:58 AM

Jorgen Schäfer: Elpy 1.8.0 released

I just released version 1.8.0 of Elpy, the Emacs Python Development Environment. This is a feature release.

Elpy is an Emacs package to bring powerful Python editing to Emacs. It combines a number of other packages, written both in Emacs Lisp and in Python.

Quick Installation

Evaluate this:

(require 'package)
(add-to-list 'package-archives
'("elpy" .

Then run M-x package-install RET elpy RET.

Finally, run the following (and add them to your .emacs):


Changes in 1.8.0

  • Emacs 24.5 is now officially supported
  • The new configuration option elpy-rpc-ignored-buffer-size defines a maximum buffer size beyond which completion is not handled, to avoid laggy interaction in unusually large files
  • Indentation block movement was replaced with code that just moves the marked block or the current line; this should be a lot less magical and more predictable
  • Running the test at point now correctly ignores any inner methods
  • Jedi docstrings now show the full name of the object
  • The RPC interpreter is now chosen correctly on cygwin
  • elpy-shell-send-region-or-buffer now warns of tabs in the data being sent
  • Elpy now binds stdout and stderr to /dev/null to avoid being confused by spurious output from other libraries
  • RPC buffers (and processes) are removed after some time to avoid them piling up endlessly
  • It is no longer possible to use customize alone to use ipython, because of a bad interaction between custom options in Elpy and python.el
  • And lots of bugfixes (50 issues closed!)

Thanks to Aaron Schumacher, Clément Pit–Claudel, Georg Brandl, Pierre Allix, Roshan Shariff and Simen Heggestøyl for their contributions!

-1:-- Elpy 1.8.0 released (Post Jorgen Schäfer)--L0--C0--May 02, 2015 02:45 PM

Phillip Lord: lentic 0.9

Lentic is a package which implements lenticular text — two Emacs buffers that contain the same content, but are otherwise independent. Unlike indirect-buffers, which must contain absolutely identical strings, lentic buffers can contain different text, with a transformation between the two.

It was not my original plan to have another release so soon after the last release [1]. However, the work that I had planned for that release turned out to be very straightforward.

This release introduces a new form of buffer: the unmatched block buffer. The details do not matter — the practical upshot is that with, for example, org-mode it is now possible to have more than one style of source block. In my examples directory, I have an org-mode file with “hello world” in three different languages (Clojure, Python and Emacs-Lisp). Viewed through lentic, you get four views, each in a different mode, and each syntactically correct. Not a use I think I would suggest, but a nice demonstration.

Lentic is now available on MELPA, MELPA stable and github.


  1. P. Lord, "lentic 0.8", An Exercise in Irrelevance, 2015.
-1:-- lentic 0.9 (Post Phillip Lord)--L0--C0--May 01, 2015 09:30 PM

emacspeak: Announcing Emacspeak 42.0 (AnswerDog)

Emacspeak 42.0—AnswerDog—Unleashed!

1 Emacspeak-42.0 (AnswerDog) Unleashed!

** For Immediate Release:

San Jose, Calif., (May 1, 2015) Emacspeak: Redefining Accessibility In The Era Of Internet Computing –Zero cost of upgrades/downgrades makes priceless software affordable!

Emacspeak Inc (NASDOG: ESPK) – announces the immediate world-wide availability of Emacspeak 42.0 (AnswerDog) – a powerful audio desktop for leveraging today's evolving data, social and service-oriented Internet cloud.

1.1 Investors Note:

With several prominent tweeters expanding coverage of #emacspeak, NASDOG: ESPK has now been consistently trading over the social net at levels close to that once attained by DogCom high-fliers—and as of May 2015 is trading at levels close to that achieved by once better known stocks in the tech sector.

1.2 What Is It?

Emacspeak is a fully functional audio desktop that provides complete eyes-free access to all major 32 and 64 bit operating environments. By seamlessly blending live access to all aspects of the Internet such as Web-surfing, blogging, social computing and electronic messaging into the audio desktop, Emacspeak enables speech access to local and remote information with a consistent and well-integrated user interface. A rich suite of task-oriented tools provides efficient speech-enabled access to the evolving service-oriented social Internet cloud.

1.3 Major Enhancements:

  • Emacs EWW: Consume Web content efficiently. 🕷
  • Updated Info manual 🕮
  • SoX integration for generating auditory feedback ℗
  • Speech-enabled Elfeed, an Emacs Feed Reader 🗞
  • CSound generated 3d Auditory Icons ⟀
  • Audacious — An Audio Workbench using SoX 🝧
  • Audio presets for MPlayer using Ladspa filters ♮
  • emacspeak-url-templates: Smart Web access. ♅
  • Integrated TuneIn Radio search, browse and play 📻
  • emacspeak-websearch.el Find things fast. ♁
  • Calibre integration for searching and viewing epub 📚 📔
  • Complete anything via company integration ∁
  • Emacs 24.4: Supports all new features in Emacs 24.4. 🌚
  • And a lot more than will fit in this margin. …

1.4 Establishing Liberty, Equality And Freedom:

Never a toy system, Emacspeak is voluntarily bundled with all major Linux distributions. Though designed to be modular, distributors have freely chosen to bundle the fully integrated system without any undue pressure—a documented success for the integrated innovation embodied by Emacspeak. As the system evolves, both upgrades and downgrades continue to be available at the same zero-cost to all users. The integrity of the Emacspeak codebase is ensured by the reliable and secure Linux platform used to develop and distribute the software.

Extensive studies have shown that thanks to these features, users consider Emacspeak to be absolutely priceless. Thanks to this wide-spread user demand, the present version remains priceless as ever—it is being made available at the same zero-cost as previous releases.

At the same time, Emacspeak continues to innovate in the area of eyes-free social interaction and carries forward the well-established Open Source tradition of introducing user interface features that eventually show up in luser environments.

On this theme, when once challenged by a proponent of a crash-prone but well-marketed mousetrap with the assertion "Emacs is a system from the 70's", the creator of Emacspeak evinced surprise at the unusual candor manifest in the assertion that it would take popular idiot-proven interfaces until the year 2070 to catch up to where the Emacspeak audio desktop is today. Industry experts welcomed this refreshing breath of Courage Certainty and Clarity (CCC) at a time when users are reeling from the Fear Uncertainty and Doubt (FUD) unleashed by complex software systems backed by even more convoluted press releases.

1.5 Independent Test Results:

Independent test results have proven that unlike some modern (and not so modern) software, Emacspeak can be safely uninstalled without adversely affecting the continued performance of the computer. These same tests also revealed that once uninstalled, the user stopped functioning altogether. Speaking with Aster Labrador, the creator of Emacspeak once pointed out that these results re-emphasize the user-centric design of Emacspeak; "It is the user –and not the computer– that stops functioning when Emacspeak is uninstalled!".

1.5.1 Note from Aster,Bubbles and Tilden:

UnDoctored Videos Inc. is looking for volunteers to star in a video demonstrating such complete user failure.

1.6 Obtaining Emacspeak:

Emacspeak can be downloaded from GitHub, and you can visit Emacspeak on the WWW. You can subscribe to the emacspeak mailing list by sending mail to the list request address. The Emacspeak Blog is a good source for news about recent enhancements and how to use them.

The latest development snapshot of Emacspeak is always available via Git from the Emacspeak GitHub repository.

1.7 History:

Emacspeak 42.0, while moving to GitHub from Google Code, continues to innovate in the areas of auditory user interfaces and efficient, light-weight Internet access. Emacspeak 41.0 continues to improve upon the desire to provide not just equal, but superior access — technology when correctly implemented can significantly enhance human abilities. Emacspeak 40.0 goes back to Web basics by enabling efficient access to large amounts of readable Web content. Emacspeak 39.0 continues the Emacspeak tradition of increasing the breadth of user tasks that are covered without introducing unnecessary bloatware. Emacspeak 38.0 is the latest in a series of award-winning releases from Emacspeak Inc. Emacspeak 37.0 continues the tradition of delivering robust software as reflected by its code-name. Emacspeak 36.0 enhances the audio desktop with many new tools including full EPub support — hence the name EPubDog. Emacspeak 35.0 is all about teaching a new dog old tricks — and is aptly code-named HeadDog in honor of our new Press/Analyst contact. emacspeak-34.0 (AKA Bubbles) established a new beach-head with respect to rapid task completion in an eyes-free environment. Emacspeak-33.0 AKA StarDog brings unparalleled cloud access to the audio desktop. Emacspeak 32.0 AKA LuckyDog continues to innovate via open technologies for better access. Emacspeak 31.0 AKA TweetDog — adds tweeting to the Emacspeak desktop. Emacspeak 30.0 AKA SocialDog brings the Social Web to the audio desktop—you can't but be social if you speak! Emacspeak 29.0—AKA AbleDog—is a testament to the resilience and innovation embodied by Open Source software—it would not exist without the thriving Emacs community that continues to ensure that Emacs remains one of the premier user environments despite perhaps also being one of the oldest. Emacspeak 28.0—AKA PuppyDog—exemplifies the rapid pace of development evinced by Open Source software. 
Emacspeak 27.0—AKA FastDog—is the latest in a sequence of upgrades that make previous releases obsolete and downgrades unnecessary. Emacspeak 26—AKA LeadDog—continues the tradition of introducing innovative access solutions that are unfettered by the constraints inherent in traditional adaptive technologies. Emacspeak 25 —AKA ActiveDog —re-activates open, unfettered access to online information. Emacspeak-Alive —AKA LiveDog —enlivens open, unfettered information access with a series of live updates that once again demonstrate the power and agility of open source software development. Emacspeak 23.0 – AKA Retriever—went the extra mile in fetching full access. Emacspeak 22.0 —AKA GuideDog —helps users navigate the Web more effectively than ever before. Emacspeak 21.0 —AKA PlayDog —continued the Emacspeak tradition of relying on enhanced productivity to liberate users. Emacspeak-20.0 —AKA LeapDog —continues the long established GNU/Emacs tradition of integrated innovation to create a pleasurable computing environment for eyes-free interaction. emacspeak-19.0 –AKA WorkDog– is designed to enhance user productivity at work and leisure. Emacspeak-18.0 –code named GoodDog– continued the Emacspeak tradition of enhancing user productivity and thereby reducing total cost of ownership. Emacspeak-17.0 –code named HappyDog– enhances user productivity by exploiting today's evolving WWW standards. Emacspeak-16.0 –code named CleverDog– the follow-up to SmartDog– continued the tradition of working better, faster, smarter. Emacspeak-15.0 –code named SmartDog–followed up on TopDog as the next in a continuing series of award-winning audio desktop releases from Emacspeak Inc. Emacspeak-14.0 –code named TopDog–was the first release of this millennium. Emacspeak-13.0 –codenamed YellowLab– was the closing release of the 20th century. Emacspeak-12.0 –code named GoldenDog– began leveraging the evolving semantic WWW to provide task-oriented speech access to Webformation. 
Emacspeak-11.0 –code named Aster– went the final step in making Linux a zero-cost Internet access solution for blind and visually impaired users. Emacspeak-10.0 –(AKA Emacspeak-2000) code named WonderDog– continued the tradition of award-winning software releases designed to make eyes-free computing a productive and pleasurable experience. Emacspeak-9.0 –(AKA Emacspeak 99) code named BlackLab– continued to innovate in the areas of speech interaction and interactive accessibility. Emacspeak-8.0 –(AKA Emacspeak-98++) code named BlackDog– was a major upgrade to the speech output extension to Emacs.

Emacspeak-95 (code named Illinois) was released as OpenSource on the Internet in May 1995 as the first complete speech interface to UNIX workstations. The subsequent release, Emacspeak-96 (code named Egypt) made available in May 1996 provided significant enhancements to the interface. Emacspeak-97 (Tennessee) went further in providing a true audio desktop. Emacspeak-98 integrated Internetworking into all aspects of the audio desktop to provide the first fully interactive speech-enabled WebTop.

1.8 About Emacspeak:

Originally based at Cornell (NY) –home to Auditory User Interfaces (AUI) on the WWW– Emacspeak is now maintained on GitHub and SourceForge. The system is mirrored world-wide by an international network of software archives and bundled voluntarily with all major Linux distributions. On Monday, April 12, 1999, Emacspeak became part of the Smithsonian's Permanent Research Collection on Information Technology at the Smithsonian's National Museum of American History.

The Emacspeak mailing list is archived at Vassar –the home of the Emacspeak mailing list– thanks to Greg Priest-Dorman, and provides a valuable knowledge base for new users.

1.9 Press/Analyst Contact: Tilden Labrador

Going forward, Tilden acknowledges his exclusive monopoly on setting the direction of the Emacspeak Audio Desktop, and promises to exercise this freedom to innovate and his resulting power responsibly (as before) in the interest of all dogs.

**About This Release:

Windows-Free (WF) is a favorite battle-cry of The League Against Forced Fenestration (LAFF), which documents the ill-effects of Forced Fenestration.

CopyWrite )C( Aster and Hubbell Labrador. All Writes Reserved. HeadDog (DM), LiveDog (DM), GoldenDog (DM), BlackDog (DM) etc., are Registered Dogmarks of Aster, Hubbell and Tilden Labrador. All other dogs belong to their respective owners.

Author: T.V Raman

Created: 2015-04-30 Thu 15:35

Emacs (Org mode 8.2.10)


-1:-- Announcing Emacspeak 42.0 (AnswerDog) (Post T. V. Raman)--L0--C0--April 30, 2015 10:37 PM

sachachua: 2015-04-30 Emacs Hangout – hosted by Philip Stark

Thanks to Philip Stark for organizing an Emacs Hangout that’s more conducive to European timezones! Here’s the video and the notes.

You can add more comments on the event page. For more about upcoming Hangouts, check out our Google+ page.

Show notes (times might need a little adjustment):

  • 0:00:03 Introductions!
  • 0:03:03 A couple of Emacs semi-newbies =)
  • 0:03:52 Java and C# language support (autocomplete, refactoring, etc.); bridging the gap between Emacs and the runtime (Unity, Android, etc.). Batch mode for the latter. OmniSharp actually went pretty darn well this time around!
  • 0:06:10 OmniSharp demo
  • 0:06:13
  • 0:07:23 OmniSharp + company config, demo of completion. Includes API. Jump to definition as well.
  • 0:10:59 Cool refactoring stuff. Ex: intelligent rename. Watch out for bugs. Still neat!
  • 0:12:30 MS Visual Studio Code seems to run on the same backend =)
  • 0:13:18 OmniSharp background info
  • 0:14:53 New participant, working out the tech issues
  • 0:16:28 Java? Haven’t looked into it much yet, lower priority. Pain point: Eclipse project build chain. eclim? May give it a second chance.
  • 0:19:37 Wishlist: batch mode Unity for headless testing?
  • 0:20:05 Emacs and Python – working through the Google Code Jam problems. C-c C-c to execute code in the REPL, so much fun. Suggestion: org-babel blocks? =)
  • 0:21:37 Discussion about Scala and Ensime. Ooh, Ensime does Java too. Neat!
  • 0:22:49 New to Emacs Lisp. Discovering things and implementing them – good enough, but not well-polished. Writing. Helm, etc. So many things to learn! Balancing studying the Emacs Lisp intro and manual, and discovering things day to day.
  • 0:25:00 Separate Lisp file loading for experimental stuff.
  • 0:25:30 Woodnotes guide?
  • 0:26:32 Emacs StackExchange, Planet Emacsen
  • 0:29:17 Spacemacs, packaged defaults. Learning with index cards. Learning curve. Emacs community is obsessed with documentation. Phenomenal! =)
  • 0:34:23 Documented conventions, nicely-designed keybindings etc. for Spacemacs
  • 0:35:50 Spacemacs setup asks you which tradition you want to follow
  • 0:36:46 nerdtree replacement – neotree
  • 0:37:27 Goal is to not rely on Spacemacs, but for it to be a stepping-stone / scaffold
  • 0:38:30 Differences between Linux window managers; simplified workflows
  • 0:40:24 Looking at configuration frameworks piecemeal, learning workflows
  • 0:43:05 Discoverability is a big issue. helm-c-yasnippet has helm-yas-complete, helm-yas-create-snippet-on-region . Can be configured to display the keys. (setq helm-yas-display-key-on-candidate t) Has additional actions if you TAB.
  • 0:50:24 Hydra demo. Ex: moving lines up and down. Hydra for Helm?
  • 0:57:25 Lispy-mnemonic
  • 1:02:58 Usability
  • 1:05:30 Lispy-mnemonic workflow – minor mode
  • 1:06:15 back-to-indentation and restoring the binding in Lispy
  • 1:07:36 org-timer and meeting notes
  • 1:08:14 Make timestamps better! =)
  • 1:10:53 Cognitive overhead of new IDEs. Ex: SublimeText C-d marks a thing (Emacs equivalent: expand-region)? More organic, flexible commands versus specific ones, staying within your mental model.
  • 1:13:00 multiple-cursors, transpose-chars versus backward-kill-word.
  • 1:17:30 helm-swoop
  • 1:21:06 micro-optimizations, command-log-mode, keyfreq, mc/mark-all-like-this(-dwim), guru-mode
  • 1:26:06 Dealing with Eclipse wizards, things that shift you out of your mental model. Discussion of Helm and Ido. Also, helm-show-kill-ring.
  • 1:31:59 Hydra and leader keys.
  • 1:32:31 Dan’s intro. Figuring out workflow. Export Org Mode to HTML. Yasnippet for HTML5 declarations? Org Mode publishing project support (org) Publishing options
  • 1:37:20 Magit, git-timemachine, git-gutter-fringe, git-wip (for committing work in progress each time you save), git-wip-timemachine (a fork of git-wip)
  • 1:41:44 undo-tree
  • 1:42:57 git-messenger
  • 1:43:48 C-x v g, vc-annotate, colour-coding
  • 1:45:13 Emacs load times, profiling
  • 1:47:59 markdown and flycheck not finding an external command, checking the *Messages* buffer

Text chat:

M. Ian Graham 2:06 PM
M. Ian Graham 2:14 PM
Tim K 2:15 PM should be ok
Tim K 2:15 PM maybe someone has to unmute me
M. Ian Graham 2:15 PM
Tim K 2:15 PM i’ll just keep lurking for now then
me 2:18 PM
Tim K 2:19 PM tangentially related: ENSIME I used it for developing a web play framework project
M. Ian Graham 2:20 PM Ooo, scala goodness
Tim K 2:21 PM yeah it targets scala BUT it works for java as well !!
Philip Stark 2:23 PM Excellent.. Thank you Tim !
M. Ian Graham 2:25 PM
Tim K 2:25 PM @Will: Are you on Emacs.SE?
Philip Stark 2:26 PM right?
Tim K 2:26 PM yes There’s lots of good content for non-programmers there
Philip Stark 2:26 PM cool. I gotta check that out.
me 2:27 PM Yakshaving:
Tim K 2:32 PM For people who know their way around some of the starter kits: You could definitely score some points answering questions on Emacs.SE. My impression is that there usually aren’t that many people around who can answer these types of questions.
me 2:32 PM Good point!
Will Monroe 2:33 PM Thanks, Tim. That sounds like a good place for someone like me to start.
Tim K 2:33 PM Prelude is probably the one you’re thinking of
M. Ian Graham 2:45 PM
me 2:47 PM (setq helm-yas-display-key-on-candidate t)
Will Monroe 2:58 PM Hey everyone, I’ve really enjoyed listening to and talking with each of you. Have to go. See you all next time!
Tim K 3:03 PM Bye Will!
me 3:04 PM For the text chat: You might like
Tim K 3:12 PM Also: multiple cursors
me 3:23 PM keyfreq?
Tim K 3:25 PM guru-mode ?
me 3:41 PM
Tim K 3:41 PM
me 3:43 PM
Philip Stark 3:43 PM ah thx

Thanks, everyone!

The post 2015-04-30 Emacs Hangout – hosted by Philip Stark appeared first on sacha chua :: living an awesome life.

-1:-- 2015-04-30 Emacs Hangout – hosted by Philip Stark (Post Sacha Chua)--L0--C0--April 30, 2015 09:25 PM

Sebastian Wiesner: Configuring buffer display in Emacs

I guess every Emacs user knows this particular phenomenon: windows constantly pop up at almost, but not quite, entirely undesired places. It is a surprisingly hard challenge to make Emacs display buffers in a sane way. Packages like winner and popwin tell stories about the pain of generations of Emacs users. Well, it used to be hard, but it became much easier in Emacs 24.1 with the new display-buffer-alist option.

This option maps regular expressions for buffer names to actions which tell Emacs how to display a buffer with a matching name. This sounds a little abstract at first, so let’s see this in action. The following piece of code adds a mapping for Flycheck’s error list:

(add-to-list 'display-buffer-alist
             `(,(rx bos "*Flycheck errors*" eos)
               (display-buffer-reuse-window
                display-buffer-in-side-window)
               (reusable-frames . visible)
               (side            . bottom)
               (window-height   . 0.4)))

Let’s dissect this:

  1. The first item is a regular expression which matches the buffer name of Flycheck’s error list. I use rx since I can never remember whether it’s \` or \' for the beginning of a string :)
  2. Next follows a list of display functions. Emacs tries these functions in the order of appearance to create a window for the buffer.
  3. The remaining elements are cons cells of options to fine-tune the behaviour of the display functions.

In our example we start with display-buffer-reuse-window, which reuses an existing window that already shows the buffer: if the error list is already visible in some window, there's no need to show it twice. By default, this function only considers windows in the current frame, but with the option (reusable-frames . visible) we extend its scope to all visible frames. I don't need to see the error list twice if it's already shown in another frame on my secondary monitor.

If there’s no existing window with the error list Emacs will try the next function, in our case display-buffer-in-side-window. This function creates a special “side” window for the error list. A side windows always fixed to a specific side of the frame and cannot be moved or splitted. It behaves much like the “dock windows” known from popular IDEs like IntelliJ or Visual Studio.

With the options (side . bottom) and (window-height . 0.4), display-buffer-in-side-window creates a side window at the bottom of the frame with a height of 40% of the frame's height. If there's already a side window at the bottom of the current frame, display-buffer-in-side-window replaces it with a new side window for the error list.

In other words, Flycheck’s error list always pops up at the bottom of the current frame now, occupying 40% of its height, just like error lists in Visual Studio or IntelliJ.

I like to combine this feature with this little command:

(defun lunaryorn-quit-bottom-side-windows ()
  "Quit side windows at the bottom of the current frame."
  (dolist (window (window-at-side-list nil 'bottom))
    (quit-window nil window)))

(global-set-key (kbd "C-c q") #'lunaryorn-quit-bottom-side-windows)

Now I can press C-c ! l to show a list of all Flycheck errors at the bottom of my frame, and C-c q to close it again :)

-1:-- Configuring buffer display in Emacs (Post)--L0--C0--April 29, 2015 12:00 AM

Phillip Lord: lentic 0.8

Lentic is a package which implements lenticular text — two Emacs buffers that contain the same content, but are otherwise independent. Unlike indirect-buffers, which must contain absolutely identical strings, lentic buffers can contain different text, with a transformation between the two.

This has several uses. Firstly, it allows a form of multi-modal editing, where each lentic buffer shows the text in a different mode. For example, this can be used to edit literate Haskell code. This should work with indirect-buffers, but in practice does not because the buffers share text-properties. These are a feature of the buffer strings in Emacs, and are used by some modes for their functionality; when two modes work on the same string, each tends to reset the text properties of the other.

It is possible to take this form of multi-modal editing further, where the different buffers contain different syntax. So, for example, one buffer might be in fully valid Emacs-Lisp, while the other might be a fully-valid org-mode buffer. This allows a literate programming technique even without specific support for this form of programming in the language. Taken to the extreme, it is even possible for the buffers to contain completely different strings; I have not found a good practical use for the extreme yet, but lentic now supports a rot-13 transformation which demonstrates its capabilities.

Lentic can also be used to create persistent views of the same text. For example, lentic could be used to maintain a view of the imports of a Java file, or the namespace form in Clojure, or the preamble of a LaTeX document. Unlike a second window, this view persists even if it is not visible. Alternatively, one view could use very small text and the other larger text, allowing rapid navigation.

Lentic 0.8 contains a number of new features since the 0.7 release [1]. The biggest change is that it is possible to produce any number of lentic buffers, rather than just two as previously. This means that its multi-modal and persistent view capabilities can be used at the same time.

Lentic is now available on MELPA, MELPA stable and github.


  1. P. Lord, "Lentic 0.7", An Exercise in Irrelevance, 2015.
-1:-- lentic 0.8 (Post Phillip Lord)--L0--C0--April 28, 2015 06:03 AM

Emacs Life: Problems and Errors · senny/emacs-eclim Wiki

Problems and Errors · senny/emacs-eclim Wiki: "You can display a list of the current errors and warnings in the current project by calling eclim-problems. In the problems list you can switch between displaying only errors or errors/warnings by pressing “e” and “w” respectively, or jump to the source code for the current problem by pressing RET. Press “g” to refresh the display."

-1:-- Problems and Errors · senny/emacs-eclim Wiki (Post Steven Ness)--L0--C0--April 28, 2015 03:22 AM