Pragmatic Emacs: Reformatting Tabular Data

Sometimes in my research I need to extract tabular data from a pdf paper. I can copy and paste the table into an Emacs buffer but the data is generally not formatted in a usable way. Luckily Emacs has a wealth of tools to reformat this sort of data.

Here is an animated gif illustrating some tools I use to do this (of course there are lots of other ways to do the same thing).


In the animation I use the following tools:

  • C-x h to select the whole buffer.
  • C-c | to run org-table-create-or-convert-from-region to convert the region to an org table. This doesn’t get me all the way to where I want to be, but I find it helpful to see the data clearly.
  • M-S-<left> to delete some unwanted columns
  • mc/mark-next-like-this from multiple cursors to give me a cursor on each line (I bind this to M-.)
  • M-f to move forward by word
  • shrink-whitespace to remove whitespace (I bind this to M-SPACE)
  • C-c - to run org-table-insert-hline to add a nice horizontal line to my table

This restructures the data in the way I need, and I can now use org-table-export to export to other useful formats.
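
The two personal bindings mentioned in the list are not defaults; here is a minimal setup sketch, assuming the multiple-cursors and shrink-whitespace packages are installed:

```elisp
;; Sketch of the non-default bindings used in the animation; assumes the
;; multiple-cursors and shrink-whitespace packages are installed.
(require 'multiple-cursors)
(require 'shrink-whitespace)
;; Note: M-. shadows the default xref-find-definitions binding.
(global-set-key (kbd "M-.") 'mc/mark-next-like-this)
(global-set-key (kbd "M-SPC") 'shrink-whitespace)
```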

Posted by Ben Maughan, May 22, 2017 07:42 PM

Irreal: A DSL for Elfeed

As Irreal regulars know, I’m now reading my feeds with Elfeed. It put one more frequent task under Emacs and generally makes organizing and dealing with the feeds easier. Because of my Elfeed adoption, I’ve been paying more attention to news and articles about it.

One such article from last December is by Chris Wellons himself and describes one of the internal features of Elfeed. When an XML record describing an article or post arrives, the first thing that happens is that it is converted to an equivalent sexpr structure. From there, the necessary data is extracted and put into Elfeed’s database. That should be easy and, in fact, one could imagine making the sexpr process the data itself as I described back in 2011.

Sadly, things are more complicated. The RSS and Atom standards are a bit like the situation with Markdown: no one interprets them in exactly the same way. Data may be in one of several places and may or may not be encoded as you expect. Therefore, parsing the sexpr is non trivial. In order to avoid writing the same complicated code over and over, Wellons implemented a DSL that takes a description of the desired field within the structure. The DSL, which is sort of like a pattern matching description, is then interpreted by a pair of functions.

This can take some time. The example Wellons gives is searching for the date of an Atom entry. This may exist in 5 possible fields, so he has a DSL entry to check each of those fields. As he added more features, the number of DSL entries that had to be checked grew and the process became slower.

This is where Wellons showed his genius. Rather than interpret the DSLs, parsing them over and over, he compiled them into Elisp byte code. That’s easy to do in Elisp because it’s precisely what macros are for. Basically, he replaced the two functions that interpreted the DSL with two macros that generated the same Elisp he would have written by hand. That change sped up the DSL processing by an order of magnitude and the entire XML processing by 25%. The processing is now dominated by the conversion of XML to the sexpr, which uses a function from the XML library and is out of his control.
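
Elfeed’s real DSL is internal to the package, but the interpret-versus-compile idea can be sketched with a toy lookup DSL (all names below are invented for illustration):

```elisp
;; Toy version of the idea: a "DSL entry" is a list of tag names to follow
;; into a parsed-XML-style sexpr such as (root (feed (entry (title . "hi")))).
(defun toy-dsl-interpret (path sexp)
  "Follow PATH into SEXP, re-reading the DSL on every call."
  (if (null path)
      sexp
    (toy-dsl-interpret (cdr path) (assq (car path) (cdr sexp)))))

(defmacro toy-dsl-compile (&rest path)
  "Expand PATH into nested `assq' calls once, at macro-expansion time,
so the byte compiler sees plain code with no DSL left to parse."
  (let ((form 'sexp))
    (dolist (key path)
      (setq form `(assq ',key (cdr ,form))))
    `(lambda (sexp) ,form)))

;; Both return the (title . "hi") node, but the compiled version pays the
;; DSL-parsing cost only once:
;; (toy-dsl-interpret '(feed entry title) '(root (feed (entry (title . "hi")))))
;; (funcall (toy-dsl-compile feed entry title) '(root (feed (entry (title . "hi")))))
```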

See Wellons’ post for the details. If you like Lisp or ever wondered what the big deal about macros is, you’ll enjoy reading it. If you were wondering about macros, it will open your eyes. Really, go read it.

Posted by jcs, May 22, 2017 06:19 PM

Chen Bin (redguardtoo): Use wgrep and evil to replace text efficiently

In my previous article Emacs is easy if you read code, I proved that ivy and wgrep are easy if you read code. You can even create your own plugin based on their APIs. For example, I define my-grep and my-grep-occur in init-ivy.el in order to search/replace text in the project root directory.

My wgrep-mode enabled buffer is in evil-mode. I prefer pressing the vi key binding dd to remove lines in that buffer, telling wgrep to skip them.

It turns out we need to press C-c C-p or run M-x wgrep-toggle-readonly-area before removing lines.

I'm too lazy to remember extra commands. So here is the workaround:

;; Press `dd' to delete lines in `wgrep-mode' in evil directly
(defadvice evil-delete (around evil-delete-hack activate)
  ;; make buffer writable
  (if (and (boundp 'wgrep-prepared) wgrep-prepared)
      (wgrep-toggle-readonly-area))
  ad-do-it
  ;; make buffer read-only
  (if (and (boundp 'wgrep-prepared) wgrep-prepared)
      (wgrep-toggle-readonly-area)))
Posted by Chen Bin, May 22, 2017 01:14 PM

sachachua: 2017-05-22 Emacs news

Links from /r/orgmode, /r/spacemacs, Hacker News, YouTube, the changes to the Emacs NEWS file, and emacs-devel.

Past Emacs News round-ups

Posted by Sacha Chua, May 22, 2017 07:49 AM

Irreal: 35 Coding Habits to Avoid

Christian Maioli Mackeprang has an interesting post on 35 habits that can negatively affect your coding. I’ll let you read them and make your own judgments as to their value for your situation but I’d like to comment on two of them.

The first, #20: Not bothering with mastering your tools and IDE, really resonated with me. Lately, I’ve spent more time than ever working on learning to be a more effective Emacs user. This is true even though I’ve been using Emacs for almost a decade. Every Emacs user knows there’s always something new to learn so you can probably never master the editor but the journey can make you appreciably more effective. I’ve found that the work I’ve put in has paid dividends that have more than made up for my effort.

The second bad habit, #22: Romanticizing your developer toolkit, is in some sense the converse of #20. I didn’t like it as much as #20 because it’s pretty clear that I am exceedingly attached to Emacs to the point that it is the epicenter of my workflow. Still, Emacs can’t do everything and, of course, I use other tools. Some of those tools make more sense for my workflow than using the corresponding capability of Emacs. I keep my calendar in the Apple Calendar app rather than any of the (perfectly adequate) calendar solutions in Emacs because it’s easier to sync it across all my devices and share some events with others.

For editing, though, I will always turn to Emacs. You can make the case that other editors or IDEs might be better for certain situations (the Racket IDE for Scheme or Eclipse for Java, for example), but the pain of switching to another editor and abandoning, even temporarily, my editor muscle memory means that Emacs is a better solution for me. Your mileage may vary, of course.

Posted by jcs, May 21, 2017 02:30 PM

Alex Schroeder: Emacs Wiki Down

Sadly it seems that Emacs Wiki is down. Currently Nic Ferrier is paying for the servers, so I sent him an email but haven’t heard back. So I’m thinking of resurrecting the site on my own servers.

I still remember the fiasco that started with 2014-12-18 Emacs Wiki Migration.

These are my notes. Perhaps they’re useful in case I have to restore another backup, or they might be useful to you if you want to fork Emacs Wiki.

First, restore the backups from Chile provided by zeus. Thanks, man!

rsync -az .
rsync -az .

I want to run the script from my Mojolicious Toadfarm. I added the following lines to my Toadfarm setup:


mount "$farm/" => {
  "Host" => qr{^emacswiki\.org:8080$},
  mount_point => '/wiki',
};

And this is the Mojolicious CGI plugin wrapper:

#! /usr/bin/env perl

use Mojolicious::Lite;

plugin CGI => {
  support_semicolon_in_query_string => 1,
};

plugin CGI => {
  route => '/',
  script => '/home/alex/farm/', # not necessary
  errlog => '/home/alex/farm/emacswiki.log',
  run => \&OddMuse::DoWikiRequest,
  before => sub {
    no warnings;
    $OddMuse::RunCGI = 0;
    $OddMuse::DataDir = '/home/alex/emacswiki';
    require '/home/alex/farm/';
  },
};


In order for this to work, I need an Apache site. I created /etc/apache2/sites-available/ with the following:

<VirtualHost *:80>
    Redirect permanent /
</VirtualHost>

<VirtualHost *:443>
    DocumentRoot /home/alex/
    <Directory /home/alex/>
        Options ExecCGI Includes Indexes MultiViews SymLinksIfOwnerMatch
        # legacy CGI scripts like
        AddHandler cgi-script .pl
        AllowOverride All
        Require all granted
    </Directory>

    SSLEngine on
    SSLCertificateFile      /etc/
    SSLCertificateKeyFile   /etc/
    SSLCertificateChainFile /etc/
    SSLVerifyClient None

    ProxyPass /wiki
    ProxyPass /mojo
</VirtualHost>


I removed all the *.pl files in the directory except for the wrapper script.

Reloaded the farm using ./farm reload and checked the log file for Mounting emacswiki with conditions.

Activated the site using sudo a2ensite.

Checked the config using sudo apachectl configtest. Oops! This is an obvious error, of course: SSLCertificateFile: file '/etc/' does not exist or is empty.

I need to get the SSL certificates, too.

I added to /etc/ and ran /etc/ -c but that doesn’t work. I guess it doesn’t work because the name still points to the old server. I guess for the moment I’ll try to do without HTTPS.

So this is what I’ll be using instead for the site:

<VirtualHost *:80>
    DocumentRoot /home/alex/
    <Directory /home/alex/>
        Options ExecCGI Includes Indexes MultiViews SymLinksIfOwnerMatch
        # legacy CGI scripts like
        AddHandler cgi-script .pl
        AllowOverride All
        Require all granted
    </Directory>

    ProxyPass /emacs
    ProxyPass /wiki
    ProxyPass /mojo
</VirtualHost>


Now sudo apachectl configtest says Syntax OK.

Reloaded Apache using sudo service apache2 reload.

Added a line to my /etc/hosts file:

Testing with w3m seems to work!

Better make the wiki read-only: touch ~/emacswiki/noedit.

Following links doesn’t work. w3m tells me: Can't load The problem is that Apache has as a server alias, but the Toadfarm only listens for

Change that:


mount "$farm/" => {
  "Host" => qr{^(www\.)?emacswiki\.org:8080$},
  mount_point => '/wiki',
};

And reload: ./farm reload.

That didn’t work. Hah, of course not. I need to add to my /etc/hosts, of course!

Now it works.

OK, next problem: Why does w3m give me the directory listing? Surely I’m missing my .htaccess file. Is it not being read? The /var/log/apache2/error.log file has nothing suspicious. Well, it does mention something about the directory, but I just deleted it. Are the permissions wrong? I did a chmod g-w .htaccess just to be sure and now it says:

-rw-r--r-- 1 alex alex 1955 May 29  2016

This looks correct to me.

In there, it says DirectoryIndex . Ah, that might be a problem because I removed that script. Changing that to DirectoryIndex emacs did the job!

OK, so anybody who has access to their own /etc/hosts file can now access a read-only copy of the site.

Here’s what I have planned:

  1. change the DNS entry ✓
  2. see how the site explodes 🔥🔥🔥
  3. add HTTPS

When I tried to add a News page, I noticed that I was unable to get the wiki back into writeable mode. I had to remove the noedit file I had created earlier using rm ~/emacswiki/noedit.

Then, when I tried to save, the wiki complained about some page that looked like spam not being readable and I figured that the page index must have been out of sync so I simply removed it using rm ~/emacswiki/pageidx.

And finally I recreated the lock using touch ~/emacswiki/noedit.

OK, now I’m waiting for the DNS change to spread and watching my Munin graphs.

Also, all the people with HTTPS bookmarks will get errors like the following: Bad cert ident from : accept? (y/n). That’s because the site is currently no longer listening on port 443 and the default site is. Oh well! In a few hours I’m hoping that Let’s Encrypt will allow me to regenerate certificates for Emacs Wiki and then we’ll move to HTTPS.

Hours later, I checked again and HTTP access was working. So I ran sudo /etc/ -c to get the certificates and this time it worked. I reverted the changes to the site config file /etc/apache2/sites-available/ and we’re now using this:

<VirtualHost *:80>
    Redirect permanent /
</VirtualHost>

<VirtualHost *:443>
    DocumentRoot /home/alex/
    <Directory /home/alex/>
        Options ExecCGI Includes Indexes MultiViews SymLinksIfOwnerMatch
        # legacy CGI scripts like
        AddHandler cgi-script .pl
        AllowOverride All
        Require all granted
    </Directory>

    SSLEngine on
    SSLCertificateFile      /etc/
    SSLCertificateKeyFile   /etc/
    SSLCertificateChainFile /etc/
    SSLVerifyClient None

    ProxyPass /emacs
    ProxyPass /wiki
    ProxyPass /mojo
</VirtualHost>


Notice that both /emacs and /wiki will work. Is this a bad idea? sudo apachectl configtest says the changes are good and so I ran sudo service apache2 reload. Everything seems to be working!

What about load? It’s definitely going up! :(

OK, time to read up on mod_cache. I think I want something like the following:

# Turn on caching
CacheSocache shmcb
CacheSocacheMaxSize 102400
<Location "/emacs">
    CacheEnable socache
</Location>

Well, before diving into this, I think we should just monitor how load develops over the next few hours.

A few hours later it would seem to me that there are no open issues so there is no need for extra caching.

And that also means I can try and make the website editable again.

Let’s see, what else do we need to check?

  1. does git work?
  2. what about cronjobs?

As for git, this is simple. I created the page 2017-05-18 and I expect to see it on the emacsmirror master branch. Sadly, it isn’t there. Why not?

Let’s take a look:

alex@sibirocobombus:~/emacswiki/git$ git log
commit a08f867084896e9892d148f76a54976166cd75db
Author: Alex Schroeder <>
Date:   Thu May 18 13:56:01 2017 +0200

    Backup site!

Oops! Apparently, the git repository wasn’t checked out. It makes sense, actually. But now I need to fix this. git remote -v shows no remotes. Let’s add it, and fetch data. This works because my public key is already part of the emacsmirror org on GitHub.

git remote add origin
git fetch
git branch -m master temp
git checkout master

At this point it should tell you Branch master set up to track remote branch master from origin.

git cherry-pick a08f86
git push
git branch -D temp

OK. Time to test it! In order to be able to save, I now have to change the site URL in the config file back to HTTPS. It should read

my $root = "";

I made the page edit and that seems to do the trick. git log in the git directory lists the new edit.

This brings me to the next part: cron jobs. Somebody has to push those commits, right?

  1. I added emacswiki and to the shell script that uses rsync to store daily backups in Chile. Thanks again, zeus!
  2. I found an old emacs-git-update in my bin directory and added an appropriate entry to my crontab using crontab -e.
  3. I found an old maintain-emacswiki in my bin directory, fixed it, and also added it to crontab. I definitely need to check the maintenance page a few times over the next few days.
  4. I found an old update-ell in my bin directory and decided to check the XML file referenced. The timestamp says Wed 24 Dec 2014 11:36:00 GMT so I think it’s safe to say that this won’t be required anymore.
  5. I did not find a copy of the emacs-elisp-area script. I checked the code in my config file and now I remember: this job used to call the expensive Elisp Area URLs and save the result to disk, and then URL rewriting made sure that the Elisp Area URLs called from the web would read those files instead. I just tried those links (”Alphabetical list of all Elisp libraries on this Wiki” with and without context, and “Chronological list of all Elisp libraries on this Wiki”) and it seems to work just fine. It takes a few seconds, but nothing terrible. I’ll say that this won’t be required anymore.
  6. I found an old copy of emacs-rss in my bin directory. That one precomputes some resource-intensive RSS feeds. I should definitely get those back into working condition. When I run it, the four files are generated, and they’re the four RSS feeds advertised in the HTML of the wiki, so that’s great.

And that’s all the jobs I found in an old crontab file!

Current status, one day later:

The only suspicious thing is the spike around 2:30 in the morning. But the explanation might be simple enough, looking at my crontab:

#m   h  dom mon dow   command
 02  5  *   *   *     /home/alex/bin/maintain-campaignwiki
 47 4,16 *  *   *     /home/alex/bin/backup
 28  4  *   *   *     /home/alex/bin/subscriptions
 14  3  *   *   *     /home/alex/bin/emacs-git-update
 32  2  *   *   *     /home/alex/bin/maintain-emacswiki

At 2:32, the maintenance job runs. The wget output is available in the maintenance directory:

--2017-05-19 02:32:01--
Resolving (
Connecting to (||:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: ‘/home/alex/’

     0K .......... .......... .......... .......... .......... 43.1K
    50K .......... .......... .......... .......... .......... 11.3K
   100K .......... .......... .......... .......... .......... 3.66K
   150K .......... .......... .......... .......... .......... 21.3K
   200K .......... .......... .......... .......... .......... 35.7K
   250K .......... .......... .......... .......... .......... 7.86K
   300K .......... .......... .......... .......... .......... 12.4K
   350K .......... .......... .......... .......... .......... 22.8K
   400K .......... .......... .......... .......... .......... 19.3K
   450K .......... .......... .......... .......... .......... 3.64K
   500K .......... .......... .......... .......... .......... 5.91K
   550K .......... .......... .......... .......... .......... 14.6K
   600K .......... .......... .......... .......... .......... 10.3K
   650K .......... .......... .......... .......... .......... 15.4K
   700K .......... .......... .......... .......... .......... 20.5K
   750K .......... .......... .......... .......... .......... 15.3K
   800K .......... .......... .......... .......... .......... 15.3K
   850K .......... .......... .......... .......... .......... 15.4K
   900K .......... .......... .......... ......                18.5K=86s

2017-05-19 02:34:03 (10.9 KB/s) - ‘/home/alex/’ saved [959329]

I’m guessing that these two minutes are causing the spike.

When I did some testing with the CSS, I ran into problems. If you choose a different theme via the CSS page, it gets stored in a cookie. Cookies are specific to a site, and so cookies set on different hostnames are separate from each other. This will not do. I’ve now changed the Apache config file to create the appropriate redirections. At the same time, I wanted to clean up the www vs. non-www situation.

<VirtualHost *:80>
    Redirect permanent /
</VirtualHost>

<VirtualHost *:443>
    Redirect permanent /
    SSLEngine on
    SSLCertificateFile      /etc/
    SSLCertificateKeyFile   /etc/
    SSLCertificateChainFile /etc/
    SSLVerifyClient None
</VirtualHost>

<VirtualHost *:443>
    DocumentRoot /home/alex/
    <Directory /home/alex/>
        Options ExecCGI Includes Indexes MultiViews SymLinksIfOwnerMatch
        # legacy CGI scripts like
        AddHandler cgi-script .pl
        AllowOverride All
        Require all granted
    </Directory>

    SSLEngine on
    SSLCertificateFile      /etc/
    SSLCertificateKeyFile   /etc/
    SSLCertificateChainFile /etc/
    SSLVerifyClient None

    Redirect permanent /wiki
    ProxyPass /emacs
    ProxyPass /mojo
</VirtualHost>

I wonder whether it’s important to prevent outside access. I see no problem?


Posted May 18, 2017 11:24 AM

Marcin Borkowski: Some emacs-devel humor

As you may have noticed, I’ve been blogging here regularly once per week (on average) for a couple of years now. Since tomorrow I’m going for a short vacation without my laptop, instead of posting the next article around Sunday, I’m doing it now. And since I’ve been extremely busy lately, I only have a short, light-hearted thing to say. Here you have a short quotation from the emacs-devel mailing list (anonymized to protect the innocent ;-)).
Posted May 17, 2017 07:11 PM

Matthias Pfeifer: Emacs memory consumption

The Emacs built-in command (garbage-collect) gives detailed information about the data structures that currently consume memory. It is probably not the most useful information but I wanted to collect the data and plot it. I started with writing functions to access the list returned from (garbage-collect): (defsubst get-mem-conses (mi) (let ((data (nth 0 mi))) (/ […]
Posted by Matthias, May 17, 2017 09:57 AM

emacshorrors: The Dude Abides

While crafting another patch for cc-mode, I stumbled upon the following:

;; RMS says don't make these the default.
;; (April 2006): RMS has now approved these commands as defaults.
(unless (memq 'argumentative-bod-function c-emacs-features)
  (define-key c-mode-base-map "\e\C-a"    'c-beginning-of-defun)
  (define-key c-mode-base-map "\e\C-e"    'c-end-of-defun))

I’ve seen this kind of thing before, but it keeps puzzling me. Why would you:

  • Put a comment there about not making the following a default
  • Put a clarifying comment how these have been approved as defaults now
  • Keep both comments to trip up future readers

Any other project I’m involved in would immediately remove that commentary. Emacs is the only one where suggesting such drastic action would yield a lengthy bikeshedding discussion about the merits of pre-VCS traditions and how keeping them will toughen up whoever may join in the collaborative development effort. Yes, seriously.

Posted by Vasilij Schneidermann, May 15, 2017 07:59 PM

sachachua: 2017-05-15 Emacs news

Links from /r/orgmode, /r/spacemacs, Hacker News, YouTube, the changes to the Emacs NEWS file, and emacs-devel.

Past Emacs News round-ups

Posted by Sacha Chua, May 15, 2017 06:35 AM

Marcin Borkowski: Smerge mode

Some time ago, when fixing yet another merge conflict, I noticed something I didn’t know about: it turned out that Emacs enabled something called Smerge mode in the buffer with the conflict markers. I pressed C-h m and learned that it’s quite useful! You can easily keep one or the other version (at each conflict), concatenate both versions (effectively deleting the conflict markers), move to the previous or next conflict, etc. I did not analyse all of its commands yet, but even this limited subset is very useful. Also, Smerge mode turned itself off after resolving (one way or another) the last remaining conflict. Very nice!
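
For reference, the commands in question live under the C-c ^ prefix by default (names as of Emacs 25; later versions rename some of these):

```elisp
;; Core smerge-mode commands, bound under the C-c ^ prefix by default:
;;   C-c ^ n   smerge-next         jump to the next conflict
;;   C-c ^ p   smerge-prev         jump to the previous conflict
;;   C-c ^ m   smerge-keep-mine    keep our side of the conflict
;;   C-c ^ o   smerge-keep-other   keep the other side
;;   C-c ^ a   smerge-keep-all     concatenate both versions
```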
Posted May 15, 2017 05:47 AM

Pragmatic Emacs: Save window layouts with ivy-view

The wonderful ivy library provides a command ivy-view which allows you to quickly bookmark the current arrangement of windows in your Emacs frame. The nice thing is that once you do this, the bookmarked arrangement then appears in your ivy-powered buffer switching list, so changing back to the arrangement you had is as easy as switching buffers. This makes for a lightweight alternative to other methods for managing window layouts.

To use this, just run ivy-push-view to store the current view, and optionally give it a name (a useful default will be offered). This will then be offered when you switch buffers using ivy-switch-buffer (which you are using automatically if you use ivy-mode). To make these ivy-views appear in your buffer list, you might need to set the option

(setq ivy-use-virtual-buffers t)

in your Emacs config file.

Ivy author abo-abo gives an example in his blog post – take a look if this sounds useful.
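
Putting the pieces together, a minimal setup sketch (the C-c v / C-c V bindings are suggestions, not defaults):

```elisp
;; Minimal ivy-view setup sketch.
(ivy-mode 1)                                   ; ivy-powered C-x b
(setq ivy-use-virtual-buffers t)               ; list saved views among buffers
(global-set-key (kbd "C-c v") 'ivy-push-view)  ; bookmark the current layout
(global-set-key (kbd "C-c V") 'ivy-pop-view)   ; delete a saved view
```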

Posted by Ben Maughan, May 14, 2017 11:38 PM

Phil Hagelberg: in which actors simulate a protocol

I've been on a bit of a yak shave recently on Bussard, my spaceflight programming adventure game. The game relies pretty heavily on simulating various computer systems, from your own craft to space stations, portals, rovers, and other craft. It naturally needs to simulate communications between all these.

I started with a pretty simple method of having each connection spin up its own coroutine running its own sandboxed session. Space station sessions run smash, a vaguely bash-like shell in a faux-unix, while connecting to a portal triggers a small lisp script to check for clearance and gradually activate the gateway sequence. The main loop would allow each session's coroutine a slice of time for each update tick, but a badly-behaved script could make the frame rate suffer. (Coroutines, you will remember, are a form of cooperative multitasking; not only do they not allow more than one thing to literally be running at the same time, but handing control off must be done explicitly.) Also, input and output were handled in a pretty ad-hoc way where Lua tables were used as channels to send strings to and from these session coroutines. But most problematic of all was the fact that there wasn't any uniformity or regularity in the implementations of the various sessions.

Bussard shell session

The next big feature I wanted to add was the ability to deploy rovers from your ship and SSH into them to control their movements or reprogram them. But I really didn't want to add a third half-baked session type; I needed all the different implementations to conform to a single interface. This required some rethinking.

The codebase is written primarily in Lua, but not just any Lua—it uses the LÖVE framework. While Lua's concurrency options are very limited, LÖVE offers true OS threads which run independently of each other. Now of course LÖVE can't magically change the semantics of Lua—these threads are technically in the same process but cannot communicate directly. All communication happens over channels (aka queues) which allow copies of data to be shared, but not actual state.
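
As a minimal sketch of those channel semantics (the file and channel names here are invented):

```lua
-- main.lua: spawn a worker thread and talk to it over named channels.
local thread = love.thread.newThread("worker.lua")  -- invented file name
local stdin  = love.thread.getChannel("stdin")
local stdout = love.thread.getChannel("stdout")
thread:start()
stdin:push("echo hello")   -- a *copy* of the string crosses the boundary
print(stdout:demand())     -- blocks until the worker replies

-- worker.lua would mirror this:
-- local stdin  = love.thread.getChannel("stdin")
-- local stdout = love.thread.getChannel("stdout")
-- local line = stdin:demand()     -- parks until input arrives
-- stdout:push("got: " .. line)    -- no shared state, only message copies
```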

While these limitations could be annoying in some cases, they turn out to be a perfect fit for simulating communications between separate computer systems. Moving to threads allows for much more complex programs to run on stations, portals, rovers, etc without adversely affecting performance of the game.

Each world has a server thread with a pair of input/output channels that gets started when you enter that world's star system. Upon a successful login, a thread is created for that specific session, which also gets its own stdin channel. Input from the main thread's SSH client gets routed from the server thread to the stdin channel of each specific session. Each OS implementation can provide its own implementation of what a session thread looks like, but they all exchange stdin and stdout messages over channels. Interactive sessions will typically run a shell like smash or a repl, and their thread parks on stdin:demand(), waiting until the main thread has some input to send along.

This works great for regular input and output, but sometimes it's necessary for the OS thread to make state changes to tables in the main thread, such as the cargo script for buying and selling. Time to build an RPC mechanism! I created a whitelist table of all functions which should be exposed to code running in a session thread over RPC. Each of these is exposed as a shim function in the session's sandbox:

local add_rpc = function(sandbox, name)
   sandbox[name] = function(...)
      local chan = love.thread.newChannel()
      output:push({op="rpc", fn=name, args={...}, chan=chan})
      local response = chan:demand()
      if(response[1] == "_error") then
         table.remove(response, 1)   -- drop the "_error" marker
         error(response[1])          -- re-raise the error inside the session
      else
         return unpack(response)
      end
   end
end
When the shim function is called it sends an op="rpc" table with a new throwaway channel (used only for communicating the return value), and sends it back over the output channel. The main thread picks this up, looks up the function in the rpcs table, and sends a message back over the response channel with the return value. This same RPC mechanism works equally well for scripts on space stations as it does for the portal control script, and a similar variation (but going the other direction) allows the SSH client to implement tab completion by making an RPC call to get completion targets.
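
The post shows only the session-side shim; the main-thread side of the dispatch could look roughly like this (the rpcs whitelist and the handle_rpc name are invented for illustration):

```lua
-- Hypothetical main-thread dispatcher for the op="rpc" messages.
local rpcs = {}  -- whitelist: rpc name -> function allowed to touch main state

local function handle_rpc(msg)
   -- msg = {op="rpc", fn=NAME, args={...}, chan=CHANNEL}
   local fn = rpcs[msg.fn]
   if not fn then
      msg.chan:push({"_error", "unknown rpc: " .. tostring(msg.fn)})
      return
   end
   local result = {pcall(fn, unpack(msg.args))}
   if table.remove(result, 1) then      -- pop pcall's ok flag
      msg.chan:push(result)             -- success: remaining return values
   else
      msg.chan:push({"_error", unpack(result)})  -- failure: error value
   end
end
```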

They're not perfect, but the mechanisms LÖVE offers for concurrency have been a great fit in this particular case.

Posted by Phil Hagelberg, May 14, 2017 09:01 PM

Emacs café: How to find all unused functions in JS buffers

The real power of Emacs lies in its extensibility. To be able to quickly hack some Elisp together to fix a specific problem right in your development environment is something quite unique to Emacs, and it makes it stand apart from other text editors.

I’m working on a fairly large JavaScript code base for which maintenance can sometimes be an issue.

Yesterday I wanted to quickly find all function definitions in a JavaScript file that were not referenced anymore in the project, so I decided to hack some Elisp to do that.

What do we already have?

Let’s see what building blocks are already available.

xref-js2 makes it easy to find all references to a specific function within a project, and js2-mode exposes an AST that can be visited.

All in all, what I want to achieve shouldn’t be too hard to implement!

First steps

I’m calling my small package js2-unused, so all functions and variables will have that prefix.

We’ll need some packages along the way, so let’s require them:

(require 'seq)
(require 'xref-js2)
(require 'subr-x)

The first step is to find all function definitions within the current buffer. JS2-mode has a function js2-visit-ast that makes it really easy to traverse the entire AST tree.

We can first define a variable that will hold all function definition names that we find:

(defvar js2-unused-definitions nil)

Now let’s traverse the AST and find all function definitions. We want to find:

  • all assignments that assign to a function;
  • all function declarations that are named (skipping anonymous functions).
(defun js2-unused--find-definitions ()
  ;; Reset the value before visiting the AST
  (setq js2-unused-definitions nil)
  (js2-visit-ast js2-mode-ast #'js2-unused-visitor))

(defun js2-unused-visitor (node end-p)
  "Add NODE's name to `js2-unused-definitions' if it is a function."
  (unless end-p
    (cond
     ;; assignment to a function
     ((and (js2-assign-node-p node)
           (js2-function-node-p (js2-assign-node-right node)))
      (push (js2-node-string (js2-assign-node-left node)) js2-unused-definitions))
     ;; function declaration (skipping anonymous ones)
     ((js2-function-node-p node)
      (if-let ((name (js2-function-name node)))
          (push name js2-unused-definitions))))
    ;; Return t to keep visiting child nodes
    t))

Finding references using xref-js2

Now that we can find and store all function names in a list, let’s use xref-js2 to filter the ones that are never referenced. If we find unreferenced functions, we simply display a message listing them.

(defun js2-unused-functions ()
  (interactive)
  ;; Make sure that JS2 has finished parsing the buffer
  (js2-mode-wait-for-parse
   (lambda ()
     ;; Walk the AST tree to find all function definitions
     (js2-unused--find-definitions)
     ;; Use xref-js2 to filter the ones that are not referenced anywhere
     (let ((unused (seq-filter (lambda (name)
                                 (null (xref-js2--find-references
                                        (js2-unused--unqualified-name name))))
                               js2-unused-definitions)))
       ;; If there are unreferenced functions, display a message
       (apply #'message (if unused
                            `("Unused functions in %s: %s"
                              ,(file-name-nondirectory buffer-file-name)
                              ,(mapconcat #'identity unused " "))
                          '("No unused function found")))))))

(defun js2-unused--unqualified-name (name)
  "Return the local name of NAME ( => baz)."
  (save-match-data
    (if (string-match "\\.\\([^.]+\\)$" name)
        (match-string 1 name)
      name)))

That’s it! In ~30 lines we can now find unreferenced functions in any JS file. Sure, the code is not perfect, far from it, but it was hacked together in 10 minutes and gets the job done.

Quickly writing some lisp code to fix a specific problem is something I do very often. Most of the time, it’s code I throw away as soon as the task is completed, but from time to time it’s something generic enough to be reused later, in which case I save it in my emacs.d, or make a proper package out of it.

If you find this feature useful, you can grab it from my emacs.d.

-1:-- How to find all unused functions in JS buffers (Post Nicolas Petton)--L0--C0--May 12, 2017 12:43 PM

Matthias Pfeifer: Emacs init performance analysis

I recently wanted to have some more information about which of the packages I am using contributes most to the total (emacs-init-time). I keep my Emacs init code in a single file and manually divide the file into sections of related code. A section is opened by entering a carefully prepared […]
-1:-- Emacs init performance analysis (Post Matthias)--L0--C0--May 10, 2017 09:30 AM

Timo Geusch: RTFM, or how to make unnecessary work for yourself editing inf-mongo

Turns out I made some unnecessary “work” for myself when I tried to add support for command history to inf-mongo. As Mickey over at Mastering Emacs points out in a blog post, comint mode already comes with M-n and M-p Read More

The post RTFM, or how to make unnecessary work for yourself editing inf-mongo appeared first on The Lone C++ Coder's Blog.

-1:-- RTFM, or how to make unnecessary work for yourself editing inf-mongo (Post Timo Geusch)--L0--C0--May 10, 2017 01:31 AM

Emacs café: Setting up Emacs for JavaScript (part #2)

This is the second part of my series of articles describing how to make Emacs a great JavaScript development environment. This time we’ll focus on getting good auto-completion with type inference.

If you haven’t read it yet, you should jump to the first post first to get things started.

Setting up Tern & company-mode for auto-completion

Tern is a great tool once set up correctly. It parses JavaScript files in a project and does type inference to provide meaningful completion (with type hints) and support for cross-references.

Unfortunately, cross-references with tern never reliably worked for me, which is why I have always used xref-js2 instead (see part #1).

For auto-completion, we’ll be using company-mode with tern. Let’s go ahead and install tern:

$ sudo npm install -g tern

Now let’s install the Emacs packages:

M-x package-install RET company-tern RET

The Emacs configuration is straightforward: we simply enable company-mode with the tern backend for JavaScript buffers:

(require 'company)
(require 'company-tern)

(add-to-list 'company-backends 'company-tern)
(add-hook 'js2-mode-hook (lambda ()
                           (tern-mode)
                           (company-mode)))

;; Disable completion keybindings, as we use xref-js2 instead
(define-key tern-mode-keymap (kbd "M-.") nil)
(define-key tern-mode-keymap (kbd "M-,") nil)

Now, depending on your JavaScript project, you might want to set up tern to work with your project structure. If completion doesn't work out of the box using tern's defaults, you will have to configure it using a .tern-project file placed in the root folder containing your JavaScript files.

Here’s an example setup for a project that uses requirejs and jQuery, ignoring files from the bower_components directory:

{
  "libs": [
    "jquery"
  ],
  "loadEagerly": [
    "./**/*.js"
  ],
  "dontLoad": [
    "./bower_components/"
  ],
  "plugins": {
    "requirejs": {
      "baseURL": "./"
    }
  }
}

Once set up, tern offers superb completion. Together with company-mode, you get great context-based completion with type inference.


When completing a function, you can hit <F1> to get its documentation:

Ternjs documentation

Until next time

In the next articles I’ll cover linting with Flycheck, gulp and grunt integration into Emacs, and of course how to setup and use Indium.

-1:-- Setting up Emacs for JavaScript (part #2) (Post Nicolas Petton)--L0--C0--May 09, 2017 03:00 PM

Chen Bin (redguardtoo): Emacs is easy if you read code

If you regard a package as a collection of APIs and read its code, Emacs is easy to master.

For example, here is a useful tip on using counsel-ag and wgrep to edit multiple files I recently learned.

To understand this black magic, you only need to know counsel-ag-occur from counsel.el (v0.9.1):

(defun counsel-ag-occur ()
  "Generate a custom occur buffer for `counsel-ag'."
  (unless (eq major-mode 'ivy-occur-grep-mode)
    (ivy-occur-grep-mode))
  (setq default-directory counsel--git-grep-dir)
  (let* ((regex (counsel-unquote-regex-parens
                 (setq ivy--old-re
                       (ivy--regex
                        (progn (string-match "\"\\(.*\\)\"" (buffer-name))
                               (match-string 1 (buffer-name)))))))
         (cands (split-string
                 (shell-command-to-string
                  (format counsel-ag-base-command (shell-quote-argument regex)))
                 "\n"
                 t)))
    ;; Need precise number of header lines for `wgrep' to work.
    (insert (format "-*- mode:grep; default-directory: %S -*-\n\n\n"
                    default-directory))
    (insert (format "%d candidates:\n" (length cands)))
    (ivy--occur-insert-lines
     (mapcar
      (lambda (cand) (concat "./" cand))
      cands))))

(ivy-set-occur 'counsel-ag 'counsel-ag-occur)
(ivy-set-display-transformer 'counsel-ag 'counsel-git-grep-transformer)

Inside counsel-ag-occur:

  • The variable regex is the regular expression built from the filter string you input. Please note that regex is unquoted by counsel-unquote-regex-parens so it can be used in a shell command. If you use regex in Emacs Lisp, you don't need to unquote it
  • The variable cands holds the candidate lines created by running ag in a shell with regex as its parameter
  • Then a wgrep-friendly buffer is created

After spending five minutes understanding the internals, you can easily implement similar features.

Now let's develop our own black magic by enhancing the wgrep-friendly buffer.

My project uses Perforce as its VCS, so I need to check out files and make them writable before using wgrep.

Read the code of wgrep.el (v2.1.10):

(defun wgrep-prepare-context ()
  (save-restriction
    (let ((start (wgrep-goto-first-found))
          (end (wgrep-goto-end-of-found)))
      (narrow-to-region start end)
      (goto-char (point-min))
      (funcall wgrep-results-parser))))

wgrep-results-parser is actually an alias of wgrep-parse-command-results, whose code is too long to paste here. You can M-x find-function wgrep-parse-command-results to read it.

By combining wgrep-prepare-context and wgrep-parse-command-results I got my own access-files-in-wgrep-buffer:

(defun access-files-in-wgrep-buffer()
  (interactive)
  (unless (featurep 'wgrep) (require 'wgrep))
  (save-restriction
    (let* ((start (wgrep-goto-first-found))
           (end (wgrep-goto-end-of-found))
           (fn-accessed nil))
      (narrow-to-region start end)
      (goto-char (point-min))
      (while (not (eobp))
        (if (looking-at wgrep-line-file-regexp)
            (let* ((fn (match-string-no-properties 1)))
              (unless (string= fn fn-accessed)
                (setq fn-accessed fn)
                (message "File relative path=%s" fn))))
        (forward-line 1)))))

You can replace the line (message "File relative path=%s" fn) with (shell-command (format "any-shell-cli %s" fn)) to do anything on the files.

You can insert the definition of access-files-in-wgrep-buffer into your .emacs and run M-x access-files-in-wgrep-buffer in a wgrep buffer to test it.

-1:-- Emacs is easy if you read code (Post Chen Bin)--L0--C0--May 09, 2017 01:38 PM

Timo Geusch: Extending inf-mongo to support scrolling through command history

I’m spending a lot of time in the MongoDB shell at the moment, so of course I went to see if someone had built an Emacs mode to support the MongoDB shell. Google very quickly pointed me at endofunky’s inf-mongo Read More

The post Extending inf-mongo to support scrolling through command history appeared first on The Lone C++ Coder's Blog.

-1:-- Extending inf-mongo to support scrolling through command history (Post Timo Geusch)--L0--C0--May 08, 2017 03:30 AM

Anselm Helbig: Hidden abstractions in the Diamond Kata


The other day my colleagues and I were doing the “Diamond Kata”. If you haven’t heard about code katas yet: it’s a coding exercise you’re supposed to do repeatedly in order to hone your skills. Each time, you might want to try different approaches, different programming languages or coding disciplines. I found the Diamond Kata interesting in its own right, let me tell you why.

The task

The Diamond Kata is giving you a seemingly simple task: write code that outputs text in a diamond shape. The edges consist of letters, starting with “A”. The last letter is given as an argument and also determines the size of the diamond. The diamond for A⇒B looks like this:

 A
B B
 A
The diamond for A⇒D would look like this:

   A
  B B
 C   C
D     D
 C   C
  B B
   A

Looks pretty simple, right?
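As a quick sanity check, the size of the output is determined entirely by the last letter: the A⇒last diamond is a square with side 2 × (last − A) + 1. A one-line Ruby check (plain variables, not part of the kata code):

```ruby
# Side length of the diamond for A..last: distance from A, doubled, plus one.
last = "D"
side = 2 * (last.ord - "A".ord) + 1
p side  # => 7
```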

A solution

I’m going to present a solution in ruby. Let’s write some end-to-end tests first:

class TestDiamond < Minitest::Test
  def test_end_to_end
    assert_equal <<-EOS,"A").to_s
    EOS

    assert_equal <<-EOS,"B").to_s
 A 
 A 
    EOS

    assert_equal <<-EOS,"C").to_s
  A  
 B B 
C   C
 B B 
  A  
    EOS
  end
end
That was easy. So how are we going to solve this? Let’s start top down. We need to create the Diamond class and Diamond#to_s first. The string returned will be a concatenation of all the lines.

Diamond = do
  def to_s
  end

  def lines
    [] # ???
  end
end
There’s only ever to be the same letter on every line and lines with the same letter look exactly alike. So we just need to figure out the right sequence of letters and have a method for creating the corresponding line. Let’s work on getting the sequence of letters right.

class TestDiamond < Minitest::Test
  # [...]
  def test_letters
    assert_equal "A".chars,"A").letters
    assert_equal "ABA".chars,"B").letters
    assert_equal "ABCBA".chars,"C").letters
  end
end

Diamond = do
  FIRST = "A"

  # [...]
  def lines { |letter| line_for(letter) }
  end

  def letters
    (FIRST...last).to_a + (FIRST..last).to_a.reverse
  end

  def line_for(letter)
    "" # ???
  end
end

Well, this will do the job. Now on to implementing #line_for which will construct the string for one line. The tip and the bottom of the diamond are special as they only print one letter. For everything else, there will be some outer padding, repeated left and right, and some inner padding. We will need to do some ugly arithmetic with letters.
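Before writing the method, here is the padding arithmetic worked through by hand for last = "C" and letter = "B" (plain local variables standing in for the class internals):

```ruby
letter = "B"
first  = "A"
width  = 2  # last.ord - first.ord for last = "C"

# Outer padding shrinks as the letter moves away from FIRST.
outer = " " * (width - (letter.ord - first.ord))   # => " "
# Inner padding is one less than twice the letter's distance from FIRST.
inner = " " * ((letter.ord - first.ord) * 2 - 1)   # => " "

p outer + letter + inner + letter + outer  # => " B B "
```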

class TestDiamond < Minitest::Test
  # [...]
  def test_line_for
    assert_equal "A\n","A").line_for("A")
    assert_equal " A \n","B").line_for("A")
    assert_equal "B B\n","B").line_for("B")
    assert_equal " B B \n","C").line_for("B")
    assert_equal "C   C\n","C").line_for("C")
  end
end

Diamond = do
  # [...]
  def line_for(letter)
    outer_padding = " " * (width - (letter.ord - FIRST.ord))
    if letter == FIRST
      outer_padding + letter + outer_padding
    else
      inner_padding = " " * ((letter.ord - FIRST.ord) * 2 - 1)
      outer_padding + letter + inner_padding + letter + outer_padding
    end + "\n"
  end

  def width
    last.ord - FIRST.ord
  end
end

This took a few tries to get right, but seems to work. The end-to-end tests are passing now, we’re done. I wasn’t happy with this solution, though. Why? I was checking my code against Kent Beck’s four rules of simple design. Let me repeat them here:

Simple code

  1. passes test, i.e. works
  2. communicates intent
  3. contains no duplication
  4. uses a minimum amount of classes and methods

My problem was with rule 2., that the code doesn’t reflect the nature of the problem. You would never figure what the code does without running it or looking at the tests. It took me a few days to figure out another way.

The hidden abstraction

The task of the kata is not to output a certain random string. The diamond is a geometrical shape, it’s symmetrical. In math, you would draw something like this in a two dimensional coordinate system. So if we had some kind of canvas that we can render down to a string we could express the problem in a much nicer way. Let’s try this out! Here’s a really simple (square) text canvas:

class TestCanvas < Minitest::Test
  def test_draw
    canvas =
    canvas[0, 0] = "X"
    canvas[1, 1] = "Y"
    canvas[0, 2] = "Z"
    canvas[2, 0] = "A"
    assert_equal <<-EOS, canvas.to_s
X A
 Y 
Z  
    EOS
  end
end

Canvas = do
  def to_s { |row| row + "\n" }.join
  end

  def []=(x, y, value)
    rows.fetch(y)[x] = value
  end

  private

  def rows
    @rows ||= { " " * size }
  end
end

On this canvas we need to paint every letter four times. We don’t need a special case for the tips any more: the tips will get painted more than once, at the same position. I introduce the radius of the diamond. We need to shift everything by this value in X and Y direction as our origin 0, 0 is in the top left corner. Here’s something that works:
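The mirroring-and-shifting described above can be sketched in isolation; this is just the coordinate arithmetic, detached from the Canvas class:

```ruby
radius = 2
x, y = 1, 1  # a point in centered coordinates

# Mirror over both axes, then shift so the origin moves to the top-left corner.
points = [[x, y], [-x, y], [x, -y], [-x, -y]].map { |px, py| [px + radius, py + radius] }
p points  # => [[3, 3], [1, 3], [3, 1], [1, 1]]
```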

Diamond = do
  FIRST = "A"

  def to_s
  end

  def draw!
    (FIRST..last) { |letter, i| [letter, i, radius - i] }.each do |letter, x, y|
      canvas[ x + radius,  y + radius] = letter
      canvas[-x + radius,  y + radius] = letter
      canvas[ x + radius, -y + radius] = letter
      canvas[-x + radius, -y + radius] = letter
    end
  end

  def canvas
    @canvas ||=
  end

  def radius
    @radius ||= last.ord - FIRST.ord
  end

  def width
    radius * 2 + 1
  end
end
I already like this solution much better.

  • We got rid of the special case.
  • The symmetry of the diamond shape is reflected by the drawing operations.
  • The Canvas class can be tested independently and is a component that we can easily reuse.

There’s also a thing I didn’t mention previously: we had to make Diamond#letters and Diamond#line_for public so that they could be tested. But they are really an implementation detail that no other code should depend upon. With the current implementation I’m quite happy with the feedback that the end-to-end test provides. Maybe some minor thing: the #draw! method still has some duplication, the drawing of letter looks repetitive as we always need to add the radius. According to the Four Rules of Simple Design, this is something to look out for. So let’s see if we can improve.

More abstractions

Operations in 2D space are a well known subject. Moving around by a fixed amount is called translation. Let’s implement this as a decorator:

class TestCanvas < Minitest::Test
  def test_translation
    canvas =, 1, 1)
    canvas[0, 0] = "A"
    assert_equal <<-EOS, canvas.to_s
 A 
    EOS
  end
end

class Translation < SimpleDelegator
  def initialize(canvas, offset_x, offset_y)
    super(canvas)
    @offset_x = offset_x
    @offset_y = offset_y
  end

  def []=(x, y, value)
    __getobj__[x + offset_x, y + offset_y] = value
  end

  private

  attr_reader :offset_x, :offset_y
end

The decorator logic calls for a bit of boilerplate, but the end result is nice and simple. Let’s see how we can put this to good use. Two methods in Diamond will need to change, Diamond#draw! and Diamond#canvas.

Diamond = do
  # [...]

  def draw!
    (FIRST..last) { |letter, i| [letter, i, radius - i] }.each do |letter, x, y|
      canvas[ x,  y] = letter
      canvas[-x,  y] = letter
      canvas[ x, -y] = letter
      canvas[-x, -y] = letter
    end
  end

  def canvas
    @canvas ||=
        radius, radius
      )
  end
end

So each drawing operation got simpler at the expense of a more complicated canvas setup. We still have four drawing operations going on, though. The obvious solution is to use a loop instead like this

Diamond = do
  # [...]
  def draw!
    (FIRST..last) { |letter, i| [letter, i, radius - i] }.each do |letter, x, y|
      [[x, y], [-x, y], [x, -y], [-x, -y]].each do |coords|
        canvas[*coords] = letter
      end
    end
  end
end
This removes duplication but makes the #draw! method harder to read. Maybe we can solve this in a similar fashion? Let’s look at it from another angle: we’re drawing a symmetrical shape, this means that we are mirroring over the X and Y axes. So the thing we need is a reflection. This should be pretty simple to do.

class TestReflection < Minitest::Test
  def test_reflection
    canvas =
          1, 1
        ),
        -1, 1
      )
    canvas[0, 0] = "A"
    canvas[1, 1] = "B"
    assert_equal <<-EOS, canvas.to_s
 A 
B B
    EOS
  end
end

class Reflection < SimpleDelegator
  def initialize(canvas, factor_x, factor_y)
    super(canvas)
    @factor_x = factor_x
    @factor_y = factor_y
  end

  def []=(x, y, value)
    __getobj__[x, y] = value
    __getobj__[x * factor_x, y * factor_y] = value
  end

  private

  attr_reader :factor_x, :factor_y
end

This looks very similar to Translation, using multiplication instead of addition. The difference is that we’re still drawing in the original location. In order to be even more general, you could introduce a stack of canvases that get layered on top of each other during rendering. I chose not to go this route here. So what does this mean for our Diamond class?

Diamond = do
  # [...]
  def draw!
    (FIRST..last) { |letter, i| [letter, i, radius - i] }.each do |letter, x, y|
      canvas[x, y] = letter
    end
  end

  def canvas
    @canvas ||=
              radius, radius
            ),
            1, -1
          ),
          -1, 1
        )
  end
end
The setup of our canvas looks pretty complex now. Is this still simple design? Duplication is reduced, but we have more classes working together. Whether this is all worth it depends on the context. For me this is just an exercise, so I can do what I please. But what if this was happening in a business context? If the business is trying to create a terminal-based text-only drawing program for UNIX-nerds, there's a high likelihood that our investment in composable classes will pay off quickly. If on the other hand this task was only a one-off job to help in creating a new company logo, our efforts would have been wasteful and our first version would have been good enough.

A parable for hidden abstractions

In hindsight the value of introducing the Canvas abstraction is obvious. Why didn't I see it earlier? I think this is due to my upbringing as a programmer. A programmer learns how to do useful stuff with a dumb machine. We're aware of the limits of our programming environments and take pride in how we're still getting useful stuff done. So it's only logical that we start to think like the machine: we break the output up into individual lines and start to solve the smaller problem of creating a single line. The problem is that this disregards the outside context and therefore obscures the nature of the task.

This reminded me of the history of astronomy. In antiquity, astronomers were able to calculate the motion of the planets pretty accurately even though they were using the geocentric model. The Ptolemaic system was pretty complex, it assumed that planets were moving in epicycles along deferents. Similar to our first implementation, it worked just fine. But as we know today, it’s more useful to put the sun in the center. Contemporary programmers struggle with finding a better point of view just as much as the astronomers of yore.

[Image: Representation of the apparent motion of the Sun, Mercury, and Venus from the Earth.]

Abstractions and TDD

People have been saying that TDD leads to better design and I tend to agree. But TDD doesn't write code, it doesn't create abstractions, and it doesn't always make obvious which step to take next. A nicely factored implementation is easy to test – it's our job as programmers to conceive it.


-1:-- Hidden abstractions in the Diamond Kata (Post admin)--L0--C0--May 07, 2017 06:56 PM

emacspeak: Emacspeak 46.0 (HelpfulDog) Unleashed

Emacspeak 46.0—HelpfulDog—Unleashed!

For Immediate Release:

San Jose, Calif., (May 1, 2017)

Emacspeak 46.0 (HelpfulDog): Redefining Accessibility In The Age Of Smart Assistants
–Zero cost of Ownership makes priceless software Universally affordable!

Emacspeak Inc (NASDOG: ESPK) — — announces the
immediate world-wide availability of Emacspeak 46.0 (HelpfulDog) — a
powerful audio desktop for leveraging today's evolving data, social
and service-oriented Internet cloud.

1 Investors Note:

With several prominent tweeters expanding coverage of
#emacspeak, NASDOG: ESPK has now been consistently trading over
the social net at levels close to that once attained by DogCom
high-fliers—and as of May 2017 is trading at levels close to
that achieved by once better known stocks in the tech sector.

2 What Is It?

Emacspeak is a fully functional audio desktop that provides complete
eyes-free access to all major 32 and 64 bit operating environments. By
seamlessly blending live access to all aspects of the Internet such as
Web-surfing, blogging, social computing and electronic messaging into
the audio desktop, Emacspeak enables speech access to local and remote
information with a consistent and well-integrated user interface. A
rich suite of task-oriented tools provides efficient speech-enabled
access to the evolving service-oriented social Internet cloud.

3 Major Enhancements:

This version requires emacs-25.1 or later.

  1. Audio-formatted Mathematics using NodeJS. ⟋🕪
  2. DBus integration for handling DBus events. 🚌
  3. Outloud is Easier To Install On 64-Bit Systems. ʕ
  4. Managing Shell Buffers across multiple projects. 📽
  5. EWW loads EBook settings when opening EPub files. 🕮
  6. Bash Utils for power users. 🐚
  7. Speech-Enabled Elisp-Refs. 🤞
  8. Updated C/C++ Mode Support. ䷢
  9. Updated EShell Support. ︹
  10. Speech-Enabled Clojure. 𝍏
  11. Speech-Enabled Geiser For Scheme Interaction. ♨
  12. Speech-Enabled Cider. 🍎
  13. Speech-Enabled Racket IDE. ƛ
  14. Parameterized auditory icons using SoX-Gen. 🔊
  15. IHeart Radio wizard. 📻
  16. Speech-Enabled Projectile. 🢫
  17. Spoken notifications are cached in a special buffer. ⏰
  18. Flycheck And Interactive Correction. 𐄂

      • And a lot more than will fit in this margin. … 🗞

4 Establishing Liberty, Equality And Freedom:

Never a toy system, Emacspeak is voluntarily bundled with all
major Linux distributions. Though designed to be modular,
distributors have freely chosen to bundle the fully integrated
system without any undue pressure—a documented success for
the integrated innovation embodied by Emacspeak. As the system
evolves, both upgrades and downgrades continue to be available at
the same zero-cost to all users. The integrity of the Emacspeak
codebase is ensured by the reliable and secure Linux platform
used to develop and distribute the software.

Extensive studies have shown that thanks to these features, users
consider Emacspeak to be absolutely priceless. Thanks to this
wide-spread user demand, the present version remains priceless
as ever—it is being made available at the same zero-cost as
previous releases.

At the same time, Emacspeak continues to innovate in the area of
eyes-free Assistance and social interaction and carries forward the
well-established Open Source tradition of introducing user interface
features that eventually show up in luser environments.

On this theme, when once challenged by a proponent of a crash-prone
but well-marketed mousetrap with the assertion "Emacs is a system from
the 70's", the creator of Emacspeak evinced surprise at the unusual
candor manifest in the assertion that it would take popular
idiot-proven interfaces until the year 2070 to catch up to where the
Emacspeak audio desktop is today. Industry experts welcomed this
refreshing breath of Courage Certainty and Clarity (CCC) at a time
when users are reeling from the Fear Uncertainty and Doubt (FUD)
unleashed by complex software systems backed by even more convoluted
press releases.

5 Independent Test Results:

Independent test results have proven that unlike some modern (and
not so modern) software, Emacspeak can be safely uninstalled without
adversely affecting the continued performance of the computer. These
same tests also revealed that once uninstalled, the user stopped
functioning altogether. Speaking with Aster Labrador, the creator of
Emacspeak once pointed out that these results re-emphasize the
user-centric design of Emacspeak; "It is the user –and not the
computer– that stops functioning when Emacspeak is uninstalled!".

5.1 Note from Aster, Bubbles and Tilden:

UnDoctored Videos Inc. is looking for volunteers to star in a
video demonstrating such complete user failure.

6 Obtaining Emacspeak:

Emacspeak can be downloaded from GitHub –see you can visit Emacspeak on the
WWW at You can subscribe to the emacspeak
mailing list — — by sending mail to the
list request address The Emacspeak
is a good source for news about recent enhancements and how to
use them.

The latest development snapshot of Emacspeak is always available via
Git from GitHub at
Emacspeak GitHub .

7 History:

  • Emacspeak 46.0 (HelpfulDog) heralds the coming of Smart Assistants.
  • Emacspeak 45.0 (IdealDog) is named in recognition of Emacs'
    excellent integration with various programming language
    environments — thanks to this, Emacspeak is the IDE of choice
    for eyes-free software engineering.
  • Emacspeak 44.0 continues the steady pace of innovation on the
    audio desktop.
  • Emacspeak 43.0 brings even more end-user efficiency by leveraging the
    ability to spatially place multiple audio streams to provide timely
    auditory feedback.
  • Emacspeak 42.0 while moving to GitHub from Google Code continues to
    innovate in the areas of auditory user interfaces and efficient,
    light-weight Internet access.
  • Emacspeak 41.0 continues to improve
    on the desire to provide not just equal, but superior access —
    technology when correctly implemented can significantly enhance the
    human ability.
  • Emacspeak 40.0 goes back to Web basics by enabling
    efficient access to large amounts of readable Web content.
  • Emacspeak 39.0 continues the Emacspeak tradition of increasing the breadth of
    user tasks that are covered without introducing unnecessary
  • Emacspeak 38.0 is the latest in a series of award-winning
    releases from Emacspeak Inc.
  • Emacspeak 37.0 continues the tradition of
    delivering robust software as reflected by its code-name.
  • Emacspeak 36.0 enhances the audio desktop with many new tools including full
    EPub support — hence the name EPubDog.
  • Emacspeak 35.0 is all about
    teaching a new dog old tricks — and is aptly code-named HeadDog in
    honor of our new Press/Analyst contact. Emacspeak-34.0 (AKA Bubbles)
    established a new beach-head with respect to rapid task completion in
    an eyes-free environment.
  • Emacspeak-33.0 AKA StarDog brings
    unparalleled cloud access to the audio desktop.
  • Emacspeak 32.0 AKA
    LuckyDog continues to innovate via open technologies for better
  • Emacspeak 31.0 AKA TweetDog — adds tweeting to the Emacspeak
  • Emacspeak 30.0 AKA SocialDog brings the Social Web to the
    audio desktop—you can't but be social if you speak!
  • Emacspeak 29.0—AKA AbleDog—is a testament to the resilience and innovation
    embodied by Open Source software—it would not exist without the
    thriving Emacs community that continues to ensure that Emacs remains
    one of the premier user environments despite perhaps also being one of
    the oldest.
  • Emacspeak 28.0—AKA PuppyDog—exemplifies the rapid pace of
    development evinced by Open Source software.
  • Emacspeak 27.0—AKA
    FastDog—is the latest in a sequence of upgrades that make previous
    releases obsolete and downgrades unnecessary.
  • Emacspeak 26—AKA
    LeadDog—continues the tradition of introducing innovative access
    solutions that are unfettered by the constraints inherent in
    traditional adaptive technologies.
  • Emacspeak 25 —AKA ActiveDog
    —re-activates open, unfettered access to online
  • Emacspeak-Alive —AKA LiveDog —enlivens open, unfettered
    information access with a series of live updates that once again
    demonstrate the power and agility of open source software
  • Emacspeak 23.0 — AKA Retriever—went the extra mile in
    fetching full access.
  • Emacspeak 22.0 —AKA GuideDog —helps users
    navigate the Web more effectively than ever before.
  • Emacspeak 21.0
    —AKA PlayDog —continued the
    Emacspeak tradition of relying on enhanced
    productivity to liberate users.
  • Emacspeak-20.0 —AKA LeapDog —continues
    the long established GNU/Emacs tradition of integrated innovation to
    create a pleasurable computing environment for eyes-free
  • emacspeak-19.0 –AKA WorkDog– is designed to enhance
    user productivity at work and leisure.
  • Emacspeak-18.0 –code named
    GoodDog– continued the Emacspeak tradition of enhancing user
    productivity and thereby reducing total cost of
  • Emacspeak-17.0 –code named HappyDog– enhances user
    productivity by exploiting today's evolving WWW
  • Emacspeak-16.0 –code named CleverDog– the follow-up to
    SmartDog– continued the tradition of working better, faster,
  • Emacspeak-15.0 –code named SmartDog–followed up on TopDog
    as the next in a continuing series of award-winning audio desktop
    releases from Emacspeak Inc.
  • Emacspeak-14.0 –code named TopDog–was

the first release of this millennium.

  • Emacspeak-13.0 –codenamed
    YellowLab– was the closing release of the
    20th. century.
  • Emacspeak-12.0 –code named GoldenDog– began
    leveraging the evolving semantic WWW to provide task-oriented speech
    access to Webformation.
  • Emacspeak-11.0 –code named Aster– went the
    final step in making Linux a zero-cost Internet access solution for
    blind and visually impaired users.
  • Emacspeak-10.0 –(AKA
    Emacspeak-2000) code named WonderDog– continued the tradition of
    award-winning software releases designed to make eyes-free computing a
    productive and pleasurable experience.
  • Emacspeak-9.0 –(AKA
    Emacspeak 99) code named BlackLab– continued to innovate in the areas
    of speech interaction and interactive accessibility.
  • Emacspeak-8.0 –(AKA Emacspeak-98++) code named BlackDog– was a major upgrade to
    the speech output extension to Emacs.
  • Emacspeak-95 (code named Illinois) was released as OpenSource on
    the Internet in May 1995 as the first complete speech interface
    to UNIX workstations. The subsequent release, Emacspeak-96 (code
    named Egypt) made available in May 1996 provided significant
    enhancements to the interface. Emacspeak-97 (Tennessee) went
    further in providing a true audio desktop. Emacspeak-98
    integrated Internetworking into all aspects of the audio desktop
    to provide the first fully interactive speech-enabled WebTop.

8 About Emacspeak:

Originally based at Cornell (NY) —home to Auditory User
Interfaces (AUI) on the WWW— Emacspeak is now maintained on GitHub. The system is mirrored
world-wide by an international network of software archives and
bundled voluntarily with all major Linux distributions. On Monday,
April 12, 1999, Emacspeak became part of the Smithsonian's Permanent
Research Collection
on Information Technology at the Smithsonian's
National Museum of American History.

The Emacspeak mailing list is archived at Vassar –the home of the
Emacspeak mailing list– thanks to Greg Priest-Dorman, and provides a
valuable knowledge base for new users.

9 Press/Analyst Contact: Tilden Labrador

Going forward, Tilden acknowledges his exclusive monopoly on
setting the direction of the Emacspeak Audio Desktop, and
promises to exercise this freedom to innovate and her resulting
power responsibly (as before) in the interest of all dogs.

*About This Release:

Windows-Free (WF) is a favorite battle-cry of The League Against
Forced Fenestration (LAFF). –see for details on
the ill-effects of Forced Fenestration.

CopyWrite )C( Aster, Hubbell and Tilden Labrador. All Writes Reserved.
HeadDog (DM), LiveDog (DM), GoldenDog (DM), BlackDog (DM) etc., are Registered
Dogmarks of Aster, Hubbell and Tilden Labrador. All other dogs belong to
their respective owners.

-1:-- Emacspeak 46.0 (HelpfulDog) Unleashed (Post T. V. Raman ( 30, 2017 03:12 PM

Manuel Uberti: Daily Clojure workflow

It’s already been a month since I started my new job. All is going well and just as expected, and it’s been interesting to see how my carefully tuned Emacs configuration dealt with everyday Clojure programming.

Truth be told, I’ve never used Emacs consistently for work. Before Clojure I mainly did Java and Emacs support for Java is notably not as good as what it offers for other programming languages. Yes, I kept Emacs around for other stuff, but it would be a dull lie to tell I was proudly using Emacs all day in the office.

Anyway, Clojure is the new kid in town now so it’s Emacs all the way. The obvious first choice is CIDER and I genuinely don’t have enough words to say how wonderful it is. I couple it with clj-refactor and Smartparens to get the most out of my coding experience.

I especially love how CIDER enables me to switch easily between Clojure and ClojureScript, with two REPLs ready to go and documentation just under my fingertips. clj-refactor enriches exploratory development with hot reloading of dependencies and handy adjustment of missing requires.
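A minimal sketch of wiring these packages together with use-package (the hooks and the keybinding prefix are illustrative assumptions, not necessarily the configuration described here):

```elisp
;; Sketch of a CIDER + clj-refactor + Smartparens setup;
;; assumes the packages are installed (e.g. from MELPA).
(use-package cider
  :hook (clojure-mode . cider-mode))

(use-package clj-refactor
  :hook (clojure-mode . clj-refactor-mode)
  :config
  ;; Prefix for the refactoring commands, e.g. C-c C-m am
  ;; runs cljr-add-missing-libspec.
  (cljr-add-keybindings-with-prefix "C-c C-m"))

(use-package smartparens
  :hook (clojure-mode . smartparens-strict-mode))
```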

Then there is Projectile. Even on the small Clojure projects we use to test available libraries, there are plenty of files around. Mickey Petersen talks about tempo when it comes to using Emacs. Projectile guarantees you don’t lose your tempo while working on projects of different sizes.

What else? I don’t think Magit needs my over enthusiastic words. Ivy is proving to be the right tool at the right time, with Swiper ever so helpful. And now I am only waiting for the day we will need proper documents to bring out the almighty AUCTeX.

In the immortal words of Bozhidar Batsov:

Emacs is power.

Emacs is magic.

Emacs is fun.

Emacs is forever.

-1:-- Daily Clojure workflow (Post)--L0--C0--April 29, 2017 12:00 AM

emacspeak: Mail On The emacspeak Audio Desktop

Email On The Emacspeak Audio Desktop

1 Overview

The question of how to do email comes up every few months on the emacspeak
mailing list. In general, see
Emacspeak Tools to quickly discover available speech-enabled
applications. This article outlines some of the available email setups
given the wide degree of variance in this space.

2 Background

How one puts together an email environment is a function of the following:

  1. How email is retrieved.
  2. How email is stored (if storing locally).
  3. How email is sent.

Here is an overview of what is available as viewed from the world of
Linux in general and Emacs in particular:

2.1 Email Retrieval

Email can be retrieved in a number of ways:

  • IMap via Emacs This is implemented well in GNUS, and poorly in
    Emacs/VM. Note that Emacs is single-threaded, and fetching large
    volumes of email via IMap is painful.
  • Batch Retrieval: Tools like fetchmail, offlineimap and friends that live
    outside of Emacs can be used to batch-retrieve email in the
    background. The retrieved mail gets delivered locally as in the past.
  • Mail Filtering: UNIX procmail enables filtering of locally
    delivered email into separate folders for automatically organizing
    incoming email.
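As a concrete illustration of the filtering step, a minimal procmail recipe (the list address is a made-up example) that auto-files one mailing list might look like:

```
# ~/.procmailrc: file mail addressed to a (hypothetical) list
# into its own folder; everything else falls through to the
# default mailbox.
MAILDIR=$HOME/mail

:0:
* ^TO_example-list@
lists/example-list
```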

2.2 Sending Email

Sending email involves:

  1. Composing email — typically invoked via key-sequence C-x m
    (command: compose-mail). Emacs email packages implement
    specific versions of this command, e.g. vm-mail from package
    emacs/vm, message-mail from the message package etc.
  2. Sending email: This is specific to the email provider being used,
    e.g., GMail. In the past, UNIX machines could talk SMTP to
    the Mail Gateway, but this has mostly disappeared over time. For
    an example of how to configure Emacs to send email via GMail
    using SMTP , see file tvr/gm-smtp.el in the emacspeak repository.
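The contents of gm-smtp.el are not reproduced here; a minimal sketch using Emacs' stock smtpmail library (the account name is a placeholder) would be along these lines:

```elisp
;; Route outgoing mail through GMail's SMTP gateway.
;; Credentials are looked up in ~/.authinfo or ~/.authinfo.gpg.
(setq send-mail-function 'smtpmail-send-it
      message-send-mail-function 'smtpmail-send-it
      smtpmail-smtp-server "smtp.gmail.com"
      smtpmail-smtp-service 587
      smtpmail-stream-type 'starttls
      smtpmail-smtp-user "user@gmail.com")
```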

2.3 Local Storage Format

  • UNIX Mail: An email folder is a file of messages. This
    format is used by clients like Emacs/VM, UNIX Mail etc.
  • Maildir: A mail folder is a directory, with
    individual email messages living in files of their
    own. Sample clients include MH-E (UNIX MH), MU4E.
  • RMail: This is Emacs' original email format.

3 Putting It All Together

The next sections show my present email setup put together using the
building blocks described above.

  1. I use Linux on all my machines, and Android on my phone.
  2. I mostly limit email usage on my phone to get a quick overview of email that might require immediate attention — toward this end, I have a to-mobile GMail label that collects urgent messages.
  3. Linux is where I handle email in volume.
  4. I use my Inbox as my ToDo list, which means that I leave little or
    no email in my Inbox unless I'm on vacation and disconnected from email.

3.1 Desktop: Batch Retrieval And Emacs/VM

This is the email setup on my workstation. See next section for the
email setup while mobile.

  1. I batch-retrieve email using fetchmail.
  2. This email gets filtered through procmail and auto-filed into
    several folders based on a set of procmail rules. Typical rules
    include separating out various email lists into their respective folders.
  3. Note that this does not preclude using IMap via GNUS to read
    email while online.
  4. Email that is not filtered into separate folders, e.g. email that
    is sent directly to me, email regarding projects that need
    immediate attention etc., lands in folder ~/mbox.
  5. So when I launch emacs/vm on my desktop, the above is all I
    need to deal with at any given moment.
  6. I typically read auto-filed mailing lists using emacs/vm about once a day or
    less — I use package mspools to get a quick overview of the
    state of those mail folders.

3.2 Mobile Access On Laptop: GNUS And IMap

See gnus-prepare.el for my gnus configuration for accessing GMail
via imap. That configuration is setup to access multiple GMail accounts.

  1. I see each GMail label as a separate group in GNUS.
  2. I only sync high-priority labels — this works well even
    over slow WIFI connections while on the road. As an example, the
    afore-mentioned to-mobile GMail label is a high-priority group.
  3. Module gm-nnir defines a GNUS/GMail extension that enables
    one to search GMail using GMail's search operators — that is my
    preferred means of quickly finding email messages using
    search. This is very fast since the search happens server-side,
    and only email headers are retrieved when displaying the search results.
  4. Note that this solution is not laptop/mobile specific — I use
    this setup for searching GMail from my desktop as well.
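gnus-prepare.el itself is not shown in the article; a minimal sketch of a multi-account GMail-over-IMap Gnus setup (the account labels are placeholders) looks roughly like:

```elisp
;; Each nnimap back end appears as its own set of groups, so every
;; GMail label becomes a group, e.g. nnimap+personal:to-mobile.
(setq gnus-select-method '(nnnil "")
      gnus-secondary-select-methods
      '((nnimap "personal"
                (nnimap-address "imap.gmail.com")
                (nnimap-server-port 993)
                (nnimap-stream ssl))
        (nnimap "work"
                (nnimap-address "imap.gmail.com")
                (nnimap-server-port 993)
                (nnimap-stream ssl))))
```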

3.3 Composing And Sending EMail

  1. I use compose-mail to compose email.
  2. I optionally activate orgtbl-mode and/or orgstruct-mode if
    editing structured content within the email body.
  3. I send email out using the setup in gm-smtp.el.
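Steps 1 and 2 above can be automated with a mail hook; a sketch (whether this particular hook is used here is an assumption):

```elisp
;; Enable org table editing in every message composition buffer.
;; (orgstruct-mode exists only in org-mode versions before 9.2.)
(add-hook 'message-mode-hook #'orgtbl-mode)
```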

4 Conclusion

  1. Email in Linux/Emacs is composed of a set of
    independent building blocks — this gives maximal flexibility.
  2. That flexibility allows one to put together different email
    workflows depending on the connectivity environment in use.
-1:-- Mail On The emacspeak Audio Desktop (Post T. V. Raman ( 23, 2017 03:19 AM

Got Emacs?: Emacs 25.2 Released

The bug fix version of Emacs 25.2 is released.  More information can be seen in the official announcement
-1:-- Emacs 25.2 Released (Post sivaram ( 21, 2017 04:31 PM

Bryan Murdock: Avoiding Verilog's Non-determinism, Part 1

In my last post we looked at some example code that showed off Verilog's non-determinism. Here it is again (you can actually run it on multiple simulators here on EDA Playground):

module top;
   reg ready;
   integer result;

   initial begin
      ready <= 1;
      result <= 5;
   end

   initial begin
      @(posedge ready);
      if (result == 5) begin
         $display("result was ready");
      end
      else begin
         $display("result was not ready");
      end
   end
endmodule
Just to review from last time, the problem is that sometimes the @(posedge ready) will trigger before result has the value 5 and sometimes it will trigger after result has the value 5. We have called this non-determinism, but a more common term for it is race condition. There is a race between the values of ready and result making it to that second process (the second initial block). If result is updated first (wins the race) then everything runs as the writer of the code intended. If ready is updated first (wins the race) then the result will not actually be ready when the writer of the code intended.
Now the question is, is there a way to write this code so that there is no race condition? Well, first of all I surveyed my body of work on simulation-only code and didn't find very many uses of non-blocking assignments like that. The common advice in the Verilog world is to use non-blocking assignments in clocked always blocks not in "procedural" code like this. If we change the above to use blocking instead of non-blocking assignments, does that fix the problem? Here's what the new first initial block looks like:
   initial begin
      ready = 1;
      result = 5;
   end
You can try it on EDA Playground and see that it still behaves the same as it did before, except with GPL Cver. With non-blocking assignments you get "result was not ready" with Cver and now you get "result was ready." That doesn't give me a lot of warm fuzzy feelings though. In fact, looking at that code makes me feel worse. If I'm thinking procedurally it looks totally backwards to set ready to one before assigning the value to result. My instinct would be to write the first initial block like this:
   initial begin
      result = 5;
      ready = 1;
   end
Is that better for avoiding race conditions? If I take the explanation for why race-conditions exist in Verilog from Jan Decaluwe's VHDL's Crown Jewel post at face value, I think it actually is. That post explains that right after the first assignment (signal value update, if we use Jan's wording) in the first initial block Verilog could decide to trigger the second process (the second initial block). That case causes problems in the original code because the first assignment is to ready and result doesn't yet have its updated value. With the assignments re-ordered as above even if the second initial block is activated after the first assignment it will not try to read the value of result. It will just block waiting for a posedge ready (which will happen next). Race condition: eliminated. Here is the full fixed code example on EDA Playground.
Strangely enough, I spent the day yesterday debugging and fixing a race condition in our production testbench code here at work. It was very different from this one, so don't get too confident after reading this single blog post. I was able to boil the problem from yesterday down into another small example and so my next post will show off that code and how I eliminated that particular race.
UPDATE: As promised, another example of a race condition.
-1:-- Avoiding Verilog's Non-determinism, Part 1 (Post Bryan ( 19, 2017 05:51 PM

Bryan Murdock: Quick Thoughts on Creating Coding Standards


No team says, "write your code however the heck you want." Unless you are coding alone, it generally helps to have an agreed upon coding standard. Agreeing upon a coding standard, however, can be a painful process full of heated arguments and hurt feelings. This morning I thought it might be useful to first categorize coding standard items before starting the arguments. My hope is that once we categorize coding standard items we can use better decision criteria for each category of items and cut down on arguing. Below are the categories I came up with really quickly with descriptions, examples, and decision criteria for each category. Feedback is welcome in the comments.

Categories of Things in Coding Standards

Language Specific Pitfalls


  • not subjective, easy to recognize pattern
  • well recognized in the industry as dangerous
  • people have war stories about these, with associated scars to prove it


  • no multiple declarations on one line in C
  • Cliff Cummings rules for blocking vs. non-blocking assignments in Verilog
  • no willy nilly gotos in C
  • no omitting braces for one liner blocks (or begin-end in Verilog)
  • no compiler warnings allowed

How to resolve disputes on which of these should be in The Coding Standard?

Defer to engineers with best war stories. If nobody has a war story for one, you can probably omit it (or can you?).

General Readability/Maintainability

"Any fool can write code that a computer can understand. Good programmers write code that humans can understand." –Martin Fowler


  • things that help humans quickly read, understand, and safely modify code
  • usually not language specific
  • the path from these items to bugs is probably not as clear as with the above items, but a path does exist


  • no magic numbers
  • no single letter variable names
  • keep functions short
  • indicators in names (_t for typedef's, p for pointers, etc.)

How to resolve disputes on which of these should be in The Coding Standard?

If someone says, "this really helps me" then the team should suck it up and do it. This is essentially the "put the slowest hiker at the front of the group" principle.

Alternatively these can be discussed on a case by case basis during code reviews instead of being codified in The Coding Standard. Be prepared for more "lively" code reviews if you go this route.

Code Formatting

The biggest wars often erupt over these because they are so subjective. This doesn't have to be the case.


  • these probably aren't really preventing any bugs
  • most can easily be automatically corrected
  • are largely a matter of taste
  • only important for consistency (which is important!)


  • amount of indent
  • brace style
  • camelCase vs. underscore_names
  • 80 column rule
  • dare I even mention it? tabs vs. spaces

How to resolve disputes on which of these should be in The Coding Standard?

Don't spend a long time arguing about these. Because they are so subjective and not likely to cause or reduce bugs one way or the other, nobody should get bent out of shape if their preference is not chosen by the team. Give everyone two minutes to make their case for their favorite, have a vote, majority wins, end of discussion. Use an existing tool (astyle, autopep8, an emacs mode, whatever is available for the language) to help people follow these rules.
-1:-- Quick Thoughts on Creating Coding Standards (Post Bryan ( 18, 2017 04:49 PM

(or emacs: Ivy 0.9.0 is out


Ivy is a completion method that's similar to Ido, but with emphasis on simplicity and customizability.


The current release consists of 339 commits and almost a full year of progress since 0.8.0. Many issues ranging from #493 to #946 were fixed. The number of people who contributed code has grown to 63; thanks, everyone!

Details on changes have been a part of the repository since 0.6.0; you can get the details of the current and past changes:


Many improvements are incremental and don’t require any extra code to enable. I’ll go over a few selected features that require a bit of information to make good use of them.

A better action choice interface

For all ivy completions, pressing M-o allows you to execute one of the custom actions for the current command. Now you have the option to use hydra for selecting an action. Use this code to turn on the feature:

(require 'ivy-hydra)

One big advantage of the new interface is that you can peek at the action list with M-o without dismissing the candidate list. Press M-o again to go back to candidate selection without selecting an action.

Here's some code from my config that ensures that I always have some extra actions to choose from:

(defun ora-insert (x)
  (insert
   (if (stringp x)
       x
     (car x))))

(defun ora-kill-new (x)
  (kill-new
   (if (stringp x)
       x
     (car x))))

(ivy-set-actions t
 '(("i" ora-insert "insert")
   ("w" ora-kill-new "copy")))


The new counsel-rg joins the group of grepping commands in counsel (counsel-ag, counsel-git-grep, counsel-grep, counsel-pt). It wraps around the newly popular and very fast ripgrep shell tool.

A nice improvement to the grepping commands is the ability to specify extra flags when you press C-u (universal-argument) before the command. See this gif for an example of excluding *.el from the files searched by ag.
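None of the grepping commands are bound to keys out of the box; a typical binding for the new command (the key choice here is arbitrary) is:

```elisp
;; Bind counsel-rg globally; C-u C-c k then prompts for extra
;; flags to pass to ripgrep, as described above.
(global-set-key (kbd "C-c k") #'counsel-rg)
```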


  • Press M-o b to change the current directory to one of the virtual buffers' directories. You continue to select a file from that directory.

  • Press M-o r to find the current file as root.


You can now customize counsel-git-log-cmd. See #652 for using this to make counsel-git-log work on Windows.


  • counsel-info-lookup-symbol now substitutes the built in info-lookup-symbol.
  • Pressing C-r while in the minibuffer of eval-expression or shell-command now gives you completion of your previous history.


Use the new counsel-yank-pop-separator variable to make counsel-yank-pop look like this.


There was a breaking change for alist-type collections some months ago. Right now the action functions receive an item from the collection, instead of (cdr item) like before. If anything breaks, the easy fix is to add an extra cdr to the action function.
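The change and its fix can be illustrated with a toy completion call (the candidate list is made up):

```elisp
;; The action now receives the whole cons cell, e.g. ("one" . 1),
;; instead of just its cdr; take the cdr yourself if you need it.
(ivy-read "Number: " '(("one" . 1) ("two" . 2))
          :action (lambda (item)
                    (message "value: %s" (cdr item))))
```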

Unique index for alist completion was added. The uniqueness assumption is that the completion system is passed a list of unique strings, of which one (or more) are selected. Unlike plain string completion, alists may require violating the uniqueness assumption: there may be two elements with the same car but different cdr. Example: C function declaration and definition for tag completion. Until now, whenever two equal strings were sent to ivy-read, only the first one could be selected. Now, each alist car gets an integer index assigned to it as a text property 'idx. So it's possible to differentiate two alist items with the same key.

Action functions don't require using with-ivy-window anymore. This allows for a lot of simplification, e.g. use insert instead of (lambda (x) (with-ivy-window (insert x))).


You can now customize faces in ivy-switch-buffer by the mode of each buffer. Here's a snippet from my config:

(setq ivy-switch-buffer-faces-alist
      '((emacs-lisp-mode . swiper-match-face-1)
        (dired-mode . ivy-subdir)
        (org-mode . org-level-4)))

Looks neat, I think:



Customize swiper-include-line-number-in-search if you'd like to match line numbers while using swiper.

New Commands


Offers completion for bookmark-jump. Press M-o d to delete a bookmark and M-o e to edit it.

A custom option counsel-bookmark-avoid-dired, which is off by default, allows you to continue completion for bookmarked directories. Turn it on with:

(setq counsel-bookmark-avoid-dired t)

and when you choose a bookmarked directory, the choice will be forwarded to counsel-find-file instead of opening a dired-mode buffer.

counsel-colors-emacs and counsel-colors-web

Completion for colors by name:

  • the default action inserts the color name.
  • M-o h inserts the color hex name.
  • M-o N copies the color name to the kill ring.
  • M-o H copies the color hex name to the kill ring.

The colors are displayed in the minibuffer; it looks really cool:


You also get 108 shades of grey to choose from, for some reason.


Completion for faces by name:



Shows the history of the Emacs commands executed and lets you select and eval one again. See #826 for a nice screenshot.


Picks up company's candidates and inserts the result into the buffer.

counsel-dired-jump and counsel-file-jump

Jump to a directory or a file in the current directory.

counsel-dpkg and counsel-rpm

Wrap around the popular system package managers.


Install or uninstall Emacs packages with completion.


Navigate the current buffer's mark ring.


Navigate the current buffer's tags.


Navigate the current buffer's outlines.


Completion for recentf.


Completion for find-library.


Completion for the last hydra's heads.


Completion for headlines of files in your org-agenda-files.


Again, thanks to all the contributors. Happy hacking!

-1:-- Ivy 0.9.0 is out (Post)--L0--C0--April 08, 2017 10:00 PM

Anselm Helbig: Boy Scouts and Yaks

The Boy Scout Rule

Over time, code bases accumulate cruft, become hard to maintain, and small changes take considerable effort. Code does not become complex overnight; it happens slowly, line by line, feature by feature. And it’s not carelessness that makes it happen: just the work of hard-working people, building functionality on top of existing code. Sometimes it’s that you don’t fully grasp how an existing class works; you’re happy that you were able to make it work and get on with your life. After all, time is precious and there’s always more to ship. But eventually someone else will revisit this code and it will take their precious time to understand what is going on before they can actually apply their changes.

If your code will need maintenance in the future it pays off to invest some time in cleaning it up. Often people’s reaction goes like this: “There’s so much to improve. This will be a lot of effort. Therefore we can’t do it now.” That is, because they envision considerable improvements in a wider part of the code base, the whole thing has to be delayed until the team gets managerial approval to spend on this. In the meantime, development will continue to be slow. Is there a better way? The “Boy Scout Rule” has a different approach. This is how Robert “Uncle Bob” Martin describes it:

The Boy Scouts have a rule: “Always leave the campground cleaner than you found it.” If you find a mess on the ground, you clean it up regardless of who might have made the mess. You intentionally improve the environment for the next group of campers. Actually the original form of that rule, written by Robert Stephenson Smyth Baden-Powell, the father of scouting, was “Try and leave this world a little better than you found it.”

What if we followed a similar rule in our code: “Always check a module in cleaner than when you checked it out.” No matter who the original author was, what if we always made some effort, no matter how small, to improve the module. What would be the result?

I think if we all followed that simple rule, we’d see the end of the relentless deterioration of our software systems. Instead, our systems would gradually get better and better as they evolved. We’d also see teams caring for the system as a whole, rather than just individuals caring for their own small little part.

Contrast this with our initial reaction. What does this mean for where to apply improvements (locality), how much effort to spend (scope) and when to do it (schedule)?

  • Locality — You’re touching the code right now, so it is likely that you or someone else will need to touch it again. A different approach would have been to use a static code analysis tool and work on the bits that get the worst grades – which might be parts of the code that do not need maintenance, so this effort would be wasteful.

  • Schedule — Making it mandatory to only ship improved code makes sure that improvements are not delayed indefinitely. But how can we make up enough time for this?

  • Scope — We can always find the time to make an improvement when the scope is limited and the Boy Scout Rule does not ask for a lot.

There’s another nice side effect of slightly improving code as you’re working on it: the areas that change most often also get the most attention. So improvement is done where it yields most benefit.

In contrast to the “Boy Scout” metaphor, a programmer’s job is to put more functionality into the code base, which can create a mess over time if you’re not careful. For an actual boy scout it is trivial to spot rubbish on the camp site, and the solution is obvious. For programmers it’s not so easy. Abstractions we’re relying on might no longer be appropriate, and past choices need to be reevaluated. This burden also gives us some benefit: we can adhere to the “You aren’t gonna need it” (YAGNI) and “Keep it simple, stupid!” (KISS) rules from Extreme Programming (XP). If you can be sure that the proper abstractions will be implemented when necessary, you don’t have to build them early on when you might not have the full picture. In other words: the Boy Scout Rule can be a means of implementing XP’s continuous refactoring.

Yak Shaving

So you’re trying to be a good boy scout and start working on cleaning up the code. Improvements are a good thing, but now the feature you want to ship is going to be delayed. So when should you stop improving your code? Hear the cautionary tale of the shaven yak:

Yak Shaving is the last step of a series of steps that occurs when you find something you need to do. “I want to wax the car today.”

“Oops, the hose is still broken from the winter. I’ll need to buy a new one at Home Depot.”

“But Home Depot is on the other side of the Tappan Zee bridge and getting there without my EZPass is miserable because of the tolls.”

“But, wait! I could borrow my neighbor’s EZPass…”

“Bob won’t lend me his EZPass until I return the mooshi pillow my son borrowed, though.”

“And we haven’t returned it because some of the stuffing fell out and we need to get some yak hair to restuff it.”

And the next thing you know, you’re at the zoo, shaving a yak, all so you can wax your car.

I guess we all know what it feels like to go down these rabbit holes. Yak shaving teaches us when enough is enough: we should stop what we’re doing if the connection to our initial task is no longer obvious. If the other tasks are important, you’ll get around to doing them. To stay within the yak shaving story: your neighbor will eventually get his pillow back. If that other piece of code needs to be maintained, it will get its chance of getting some improvement. Yak shaving makes you look busy even when you’re stuck. You’re delaying a more important task and doing less important work instead. This is when you need to look for a different approach, a simpler solution or a workaround instead.


It’s my personal belief that software is best delivered incrementally. This comes from my experience of aiming at delivering something perfect and in the end not delivering anything at all. In software design, there’s a name for the sequential process of planning and then executing big chunks of work: that’s the waterfall model. Here’s the thing: you have to build and test your code incrementally anyway. If you delay shipping it you don’t really know if it actually works. And as experience shows, there’s always some aspect you weren’t aware of.

The problems with big refactoring efforts are similar to the problems with waterfall-style projects: you can’t be sure you’re doing the right thing until you get feedback from your end users. In the case of a refactoring project this could either be breaking existing functionality for customers or delivering an updated API or data model that is not flexible enough for developers. Sometimes this isn’t because your plan was flawed – maybe you couldn’t account for new requirements that came up while you were working.

The literature on refactoring teaches us how to make structural changes to existing code without altering its behavior. If your changes are safe, you can stop and ship anytime. Sometimes it’s not easy, though, to envision an incremental path. This takes ingenuity, persistence and practice. You need to be aware of the big picture, the overall context as well. Your understanding will grow as you’re taking step after step, guiding you in the right direction.


Metaphors and stories are important tools. They help us reflect on our own behavior and exchange ideas with our coworkers. They’re best when they’re fun. What’s your favorite metaphor?


Big shout out to Simon who first made me apply the Boy Scout Rule and showed me the value of small, incremental steps.

-1:-- Boy Scouts and Yaks (Post admin)--L0--C0--April 04, 2017 08:17 AM

Chris Wellons: My Journey with Touch Typing and Vim

Given the title, the publication date of this article is probably really confusing. This was deliberate.

Three weeks ago I made a conscious decision to improve my typing habits. You see, I had a dirty habit. Despite spending literally decades typing on a daily basis, I’ve been a weak typist. It wasn’t exactly finger pecking, nor did it require looking down at the keyboard as I typed, but rather a six-finger dance I developed organically over the years. My technique was optimized towards Emacs’ frequent use of CTRL and ALT combinations, avoiding most of the hand scrunching. It was fast enough to keep up with my thinking most of the time, but was ultimately limiting due to its poor accuracy. I was hitting the wrong keys far too often.

My prime motivation was to learn Vim — or, more specifically, to learn modal editing. Lots of people swear by it, including people whose opinions I hold in high regard. The modal editing community is without a doubt larger than the Emacs community, especially since, thanks to Viper and Evil, a subset of the Emacs community is also part of the modal editing community. There’s obviously something significantly valuable about it, and I wanted to understand what that was.

But I was a lousy typist who couldn’t hit the right keys often enough to make effective use of modal editing. I would need to learn touch typing first.

Touch typing

How would I learn? Well, the first search result for “online touch typing course” was Typing Club, so that’s what I went with. By the way, here’s my official review: “Good enough not to bother checking out the competition.” For a website it’s pretty much the ultimate compliment, but it’s not exactly the sort of thing you’d want to hear from your long-term partner.

My hard rule was that I would immediately abandon my old habits cold turkey. Poor typing is a bad habit just like smoking, minus the cancer and weakened sense of smell. It was vital that I unlearn all that old muscle memory. That included not just my six-finger dance, but also my NetHack muscle memory. NetHack uses “hjkl” for navigation just like Vim. The problem was that I’d spent a couple hundred hours in NetHack over the past decade with my index finger on “h”, not the proper home row location. It was disorienting to navigate around Vim initially, like riding a bicycle with inverted controls.

Based on reading other people’s accounts, I determined I’d need several days of introductory practice where I’d be utterly unproductive. I took a three-day weekend, starting my touch typing lessons on a Thursday evening. Boy, they weren’t kidding about it being slow going. It was a rough weekend. When checking in on my practice, my wife literally said she pitied me. Ouch.

By Monday I was at a level resembling a very slow touch typist. For the rest of the first week I followed all the lessons up through the number keys, never progressing past an exercise until I had exceeded the target speed with at least 90% accuracy. This was now enough to get me back on my feet for programming at a glacial, frustrating pace. Programming involves a lot more numbers and symbols than other kinds of typing, making that top row so important. For a programmer, it would probably be better for these lessons to be earlier in the series.

For that first week I mostly used Emacs while I was finding my feet (or finding my fingers?). That’s when I experienced first hand what all these non-Emacs people — people who I, until recently, considered to be unenlightened simpletons — had been complaining about all these years: Pressing CTRL and ALT key combinations from the home row is a real pain in the ass! These complaints were suddenly making sense. I was already seeing the value of modal editing before I even started really learning Vim. It made me look forward to it even more.

During the second week of touch typing I went through Derek Wyatt’s Vim videos and learned my way around the :help system enough to bootstrap my Vim education. I then read through the user manual, practicing along the way. I’ll definitely have to pass through it a few more times to pick up all sorts of things that didn’t stick. This is one way that Emacs and Vim are a lot alike.

Update: Practical Vim: Edit Text at the Speed of Thought was recommended in the comments, and it’s certainly a better place to start than the Vim user manual. Unlike the manual, it’s opinionated and focuses on good habits, which is exactly what a newbie needs.

One of my rules when learning Vim was to resist the urge to remap keys. I’ve done it a lot with Emacs: “Hmm, that’s not very convenient. I’ll change it.” It means my Emacs configuration is fairly non-standard, and using Emacs without my configuration is like using an unfamiliar editor. This is both good and bad. The good is that I’ve truly changed Emacs to be my editor, suited just for me. The bad is that I’m extremely dependent on my configuration. What if there was a text editing emergency?

With Vim as a sort of secondary editor, I want to be able to fire it up unconfigured and continue to be nearly as productive. A pile of remappings would prohibit this. In my mind this is like a form of emergency preparedness. Other people stock up food and supplies. I’m preparing myself to sit at a strange machine without any of my configuration so that I can start the rewrite of the software lost in the disaster, so long as that machine has vi, cc, and make. If I can’t code in C, then what’s the point in surviving anyway?

The other reason is that I’m just learning. A different mapping might seem more appropriate, but what do I know at this point? It’s better to follow the beaten path at first, lest I form a bunch of bad habits again. Trust in the knowledge of the ancients.

Future directions

I am absolutely sticking with modal editing for the long term. I’m really enjoying it so far. At three weeks of touch typing and two weeks of modal editing, I’m around 80% caught back up with my old productivity speed, but this time I’ve got a lot more potential for improvement.

For now, Vim will continue taking over more and more of my text editing work. My last three articles were written in Vim. It’s really important to keep building proficiency. I still rely on Emacs for email and for syndication feeds, and that’s not changing any time soon. I also really like Magit as a Git interface. Plus I don’t want to abandon years of accumulated knowledge and leave the users of my various Emacs packages out to dry. Ultimately I believe I will end up using Evil, to get what seems to be the best of both worlds: modal editing and Emacs’ rich extensibility.

-1:-- My Journey with Touch Typing and Vim (Post)--L0--C0--April 01, 2017 04:02 AM

Flickr tag 'emacs': GAMS-mode-Emacs-no1

shiro.takeda posted a photo:


-1:-- GAMS-mode-Emacs-no1 (Post shiro.takeda)-- 30, 2017 05:37 PM