Jonas Bernoulli: Ghub 2.0 and Glab 2.0 released

I am excited to announce the release of Ghub v2.0 and Glab v2.0.
-1:-- Ghub 2.0 and Glab 2.0 released (Post)--L0--C0--March 20, 2018 07:10 PM

Irreal: RC1 for Emacs 26.1

Eli Zaretskii has announced that barring any difficulties coming to
light, the Emacs 26.1 RC1 should be released next week. Zaretskii
expects that it will be followed in short order by the release of
Emacs 26.1 itself.

Relative newcomers to Emacs will see this as simply the usual yearly
release that has become standard for software in our industry. Those
of us who have been around a bit longer know that it wasn’t always
this way for Emacs. Major releases could be years apart and came when
they came. Thanks to recent maintainers—especially John and Eli—we now
have a more predictable (and frequent) release schedule.

It’s good to remember that this is a lot of hard work and it’s all
done by volunteers. Guys like John, Eli, and the other contributors
have families and real jobs that need attending to but still make time
to keep the Emacs project going and flourishing. We all owe them a lot
of thanks.

-1:-- RC1 for Emacs 26.1 (Post jcs)--L0--C0--March 20, 2018 03:52 PM

sachachua: 2018-03-19 Emacs news

Links from /r/orgmode, /r/spacemacs, Hacker News, YouTube, the changes to the Emacs NEWS file, and emacs-devel.

-1:-- 2018-03-19 Emacs news (Post Sacha Chua)--L0--C0--March 19, 2018 04:32 PM

Irreal: Lightweight Literate Programming with Hydras

Grant Rettke has an interesting take on abo-abo’s Hydra package. His
idea is that you can use hydras for a sort of lightweight literate
programming. Most Irreal readers are already aware of hydras’ utility
as a way of gathering related functionality into a menu-driven set of
keystrokes with a common prefix. Hydra functions can be configured to
easily repeat with a single keystroke rather than the longer complete
sequence needed on the first call.
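For readers who haven’t tried the package, the small hydra-zoom example from the Hydra README gives the flavor: one prefix invocation, then single-key repeats.

```
(defhydra hydra-zoom (global-map "<f2>")
  "zoom"
  ("g" text-scale-increase "in")
  ("l" text-scale-decrease "out"))
```

After pressing `<f2> g` once, each further `g` or `l` keeps zooming without re-typing the prefix.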

Rettke uses hydras for all that, of course, but he also uses them when
learning a new package or set of related functions. Check out his post
to see how he uses a hydra as a sort of notebook to record what
functionality is available and what keystroke to use to call it. It
is, he says, executable documentation.

-1:-- Lightweight Literate Programming with Hydras (Post jcs)--L0--C0--March 19, 2018 03:46 PM

Raimon Grau: fixing indentation of lua (busted) in emacs. A nasty hack

In general, indentation is not an issue in emacs.

But there are some exceptions.  For example, in Lua, one of the de facto testing libraries is busted, which tries to mimic rspec in many aspects.

A typical busted test looks like this:
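The original snippet did not survive extraction; a representative busted spec (illustrative names) looks something like this, with nested describe/it blocks that drive the indentation problem discussed below:

```lua
describe("a stack", function()
  describe("when empty", function()
    it("reports size zero", function()
      -- busted ships rspec-style assertions like assert.are.equal
      assert.are.equal(0, 0)
    end)
  end)
end)
```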
Lua mode tends to indent code in a very lisp-y way (which I personally like) by aligning parameters to the same function using as a "starting point" the offset of the first parameter.  In this case, there's also an opened function that gets indented in addition to that base column.
This is unacceptable in most codebases, so I had to write some hack for those particular cases.

As the indentation code for lua-mode is quite complex and this is an exception to the general rule, I wrote a very ugly hack that seems to solve my problem at hand.
As with all defadvice uses, it looks like a hack because it is a big ugly hack, but at least it lets me deal with the issue and move on with my tasks without manually reindenting stuff.
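The advice itself is also missing from the extracted post; a hypothetical sketch of the shape such a hack takes (the function name and the adjustment rule are assumptions, not the post's actual code) is:

```
;; Hypothetical sketch, not the original hack: post-process the column
;; that lua-mode computes when the line closes a busted-style block.
(defadvice lua-calculate-indentation (after fix-busted-indent activate)
  (when (save-excursion
          (beginning-of-line)
          (looking-at "\\s-*end)"))    ; e.g. the "end)" closing an it(...)
    (setq ad-return-value
          (max 0 (- ad-return-value lua-indent-level)))))
```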

Another +1 for emacs hackability :)
-1:-- fixing indentation of lua (busted) in emacs. A nasty hack (Post Raimon Grau)--L0--C0--March 19, 2018 11:04 AM

Marcin Borkowski: My Org-mode hydra

I mentioned a lot of times that I am a big fan of Org-mode clocking feature. Even if I clock some things only for myself, I find it useful to learn how much time I actually need for some task. This is often surprising – for example, I think most people are completely unaware of how much you can accomplish by spending five to ten minutes every day on something (notice the “every day” part!). Of course, I want my clocking to be as smooth as possible.
-1:-- My Org-mode hydra (Post)--L0--C0--March 18, 2018 05:01 AM

sachachua: Making an 8-page 7″x4.25″ captioned photo book with Org Mode and LaTeX

Here’s another technique that makes a simple photo book. I wanted to
make an 8-page book that could be printed 4 pages to an 8.5″x14″ sheet
(duplex, flip along the short edge), with a final page size of 7″x4.25″.

Sample with my own photos:



  • ImageMagick
  • Texlive (probably)
  • latex-beamer
  • Org Mode and Emacs


We can define the labels and their captions in a named table like this:

#+NAME: story
| Let’s Go for a Walk |                 |
| Caption for photo 1 | placeholder.png |
| Caption for photo 2 | placeholder.png |
| Caption for photo 3 | placeholder.png |
| Caption for photo 4 | placeholder.png |
| Caption for photo 5 | placeholder.png |

Note that the first page is row #1 this time, instead of starting with
the last page.

Then we generate the LaTeX code with some Emacs Lisp, like so:

#+begin_src emacs-lisp :var pages=story :results value latex :exports results
(mapconcat (lambda (row) (format "\\graphicframe{%s}{%s}" (cadr row) (org-export-string-as (car row) 'latex t))) pages "\n")
#+end_src

I put that in a subtree for easier exporting with C-c C-e C-s l b (org-export-dispatch, subtree, LaTeX, Beamer).
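For the sample table above, that snippet expands to one \graphicframe call per row, roughly along these lines (the filename comes from the second column, the exported caption from the first):

```latex
\graphicframe{}{Let’s Go for a Walk}
\graphicframe{placeholder.png}{Caption for photo 1}
\graphicframe{placeholder.png}{Caption for photo 2}
```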



  • Set up Org Mode export to Beamer
    (eval-after-load "ox-latex"
      ;; update the list of LaTeX classes and associated header (encoding, etc.)
      ;; and structure
      '(add-to-list 'org-latex-classes
                    `("beamer"
                      ,(concat "\\documentclass[presentation]{beamer}\n"
                               "[NO-DEFAULT-PACKAGES]\n"
                               "[PACKAGES]\n"
                               "[EXTRA]\n")
                      ("\\section{%s}" . "\\section*{%s}")
                      ("\\subsection{%s}" . "\\subsection*{%s}")
                      ("\\subsubsection{%s}" . "\\subsubsection*{%s}"))))
  • Set up the header.tex

    This file gets included in the LaTeX file for the children’s book.
    Tweak it to change the appearance. In this example, I use black serif
    text on the left side of a picture, both occupying roughly half
    of the page. I also experimented with using a different font this time, which you might need to install (for me, I did apt-get install texlive-fonts-extra).

    \setbeamercolor{normal text}{fg=black,bg=white}
    %% \setbeamertemplate{frametitle}
    %% {
    %%   \begin{center}
    %%   \noindent
    %%   \insertframetitle
    %%   \end{center}
    %% }
    \newcommand{\graphicframe}[2] {
       %% \if #1\empty
       %% \usebackgroundtemplate{}
       %% \fi
       %% The original snippet was truncated here; a minimal body matching
       %% the description (caption on the left, picture on the right,
       %% roughly half the page each) would be:
       \begin{frame}
         \begin{columns}
           \column{0.5\textwidth} #2
           \column{0.5\textwidth} \includegraphics[width=\textwidth]{#1}
         \end{columns}
       \end{frame}
    }
  • Create the PDF
    pdflatex index.tex
  • Create one PNG per page
    mkdir pages
    convert -density 300 index.pdf -quality 100 pages/page%d.png
  • Create the 4-up imposition

    The diagram at was helpful.

    montage \( page4.png -rotate 180 \) \( page3.png -rotate 180 \) page7.png page0.png -tile 2x2 -mode Concatenate front.png
    montage \( page2.png -rotate 180 \) \( page5.png -rotate 180 \) page1.png page6.png -tile 2x2 -mode Concatenate back.png
    convert front.png back.png -density 300 ../print.pdf

Other notes

Placeholder image from – public domain.

-1:-- Making an 8-page 7″x4.25″ captioned photo book with Org Mode and LaTeX (Post Sacha Chua)--L0--C0--March 17, 2018 02:12 AM

Marcin Borkowski: A tip on yanking

I have a few longer posts in the works, but for today I want to share a simple trick. We Emacs users all know and love the kill ring, and many of us know about M-y (yank-pop) and even C-u ... C-y (i.e., a numeric argument to yank). For those who don’t know: C-u 1 C-y is equivalent to plain C-y, but C-u 2 C-y (or just C-2 C-y) inserts the previously killed text (much like C-y M-y), and also marks it as the current one. With higher arguments, it inserts earlier kills.
-1:-- A tip on yanking (Post)--L0--C0--March 10, 2018 05:14 AM

(or emacs: Using exclusion patterns when grepping


I like Git. A lot. After years of use it has really grown on me. It's (mostly) fast, (often) reliable, and (always) distributed. For me, all properties are important, but being able to do git init to start a new project in seconds is the best feature.

When it comes to working with Git day-to-day, a nice GUI can really make a difference. In Emacs world, of course it's Magit. Outside of Emacs (brr), git-cola looks to be the most promising one. If you're aware of something better, please share - I'm keeping a list of suggestions for my non-Emacs using colleagues.

Ivy integration for Git

The main two commands in ivy that I use for Git are:

  • counsel-git: select a file tracked by Git
  • counsel-rg: grep for a line in all files tracked by Git, using ripgrep as the backend.

There are many alternatives to counsel-rg that use a different backend: counsel-git-grep, counsel-ag, counsel-ack, counsel-pt. But counsel-rg is the fastest, especially when I have to deal with Git repositories that are 2 GB in size (short explanation: it's a Perforce repo with a bunch of binaries, because why not; and I'm using git-p4 to interact with it).

Using .ignore with ripgrep

Adding an .ignore file to the root of your project can really speed up your searches. In my sample project, I went from 10k files to less than 500 files.

Example content:
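The example file itself was lost in extraction; an illustrative .ignore showing both kinds of patterns (these specific entries are my own stand-ins, not the post's):

```
*.min.js
*.map
build/
third-party/
```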


As you can see, both file patterns and directories are supported. One other nifty thing that I discovered only recently is that you can use ripgrep as the backend for counsel-git in addition to counsel-rg. Which means the same .ignore file is used for both commands. Here's the setting:

(setq counsel-git-cmd "rg --files")

And here's my setting for counsel-rg:

(setq counsel-rg-base-command
      "rg -i -M 120 --no-heading --line-number --color never %s .")

The main difference in comparison to the default counsel-rg-base-command is -M 120 which means: truncate all lines that are longer than 120 characters. This is really helpful when Emacs is accepting input from ripgrep: a megabyte long line of minified JS is not only useless since you can't see it whole, but it will also likely hang Emacs for a while.


I hope you found these bits of info useful. Happy hacking!

-1:-- Using exclusion patterns when grepping (Post)--L0--C0--March 04, 2018 11:00 PM

Tom Tromey: Emacs JIT Calling Convention

I’ve been working a bit more on my Emacs JIT, in particular on improving function calling.  This has been a fun project so I thought I’d talk about it a bit.


Under the hood, the Emacs Lisp implementation has a few different ways to call functions.  Calls to or from Lisp are dispatched depending on what is being called:

  • For an interpreted function, the arguments are bound and then the interpreter is called;
  • For a byte-compiled function using dynamic binding, the arguments are bound and then the bytecode interpreter is called;
  • For a byte-compiled function using lexical binding, an array of arguments is passed to the bytecode interpreter;
  • For a function implemented in C (called a “subr” internally), up to 8 arguments are supported directly — as in, C functions of the form f(arg,arg,...); for more than that, an array of arguments is passed and the function itself must decide which slot means what.  That is, there are exactly 10 forms of subr (actually there are 11 but the one missing from this description is used for special forms, which we don’t need to think about here).

Oh, let’s just show the definition so you can read for yourself:

union {
Lisp_Object (*a0) (void);
Lisp_Object (*a1) (Lisp_Object);
Lisp_Object (*a2) (Lisp_Object, Lisp_Object);
Lisp_Object (*a3) (Lisp_Object, Lisp_Object, Lisp_Object);
Lisp_Object (*a4) (Lisp_Object, Lisp_Object, Lisp_Object, Lisp_Object);
Lisp_Object (*a5) (Lisp_Object, Lisp_Object, Lisp_Object, Lisp_Object, Lisp_Object);
Lisp_Object (*a6) (Lisp_Object, Lisp_Object, Lisp_Object, Lisp_Object, Lisp_Object, Lisp_Object);
Lisp_Object (*a7) (Lisp_Object, Lisp_Object, Lisp_Object, Lisp_Object, Lisp_Object, Lisp_Object, Lisp_Object);
Lisp_Object (*a8) (Lisp_Object, Lisp_Object, Lisp_Object, Lisp_Object, Lisp_Object, Lisp_Object, Lisp_Object, Lisp_Object);
Lisp_Object (*aUNEVALLED) (Lisp_Object args);
Lisp_Object (*aMANY) (ptrdiff_t, Lisp_Object *);
} function;

Initial Approach

Initially the JIT worked like a lexically-bound bytecode function: an array of arguments was passed to the JIT-compiled function.  The JIT compiler emitted a bunch of code to decode the arguments.

For Lisp functions taking a fixed number of arguments, this wasn’t too bad — just moving values from fixed slots in the array to fixed values in the IR.

Handling optional arguments was a bit uglier, involving a series of checks and branches, so that un-bound arguments could correctly be set to nil.  These were done something like:

if nargs < 1 goto nope1
arg0 = array[0]
if nargs < 2 goto nope2
arg1 = array[1]
goto first_bytecode
nope1: arg0 = nil
nope2: arg1 = nil
first_bytecode: ...

&rest arguments were even a bit worse, requiring a call to create a list.  (This, I think, can’t be avoided without a much smarter compiler, one that would notice when reifying the list could be avoided.)

Note that calling also has to use the fully generic approach: we make a temporary array of arguments, then call a C function (Ffuncall) that does the dispatching to the callee.  This is also a source of inefficiency.


Recently, I changed the JIT from this approach to use the equivalent of the subr calling convention.  Now, any function with 8 or fewer (non-&rest) arguments is simply an ordinary function of N arguments, and we let the already-existing C code deal with optional arguments.

Although this often makes the generated assembly simpler, it won’t actually perform any better — the same work is still being done, just somewhere else.  However, this approach does use a bit less memory (most JIT-compiled functions are shorter); and it opens the door to an even bigger improvement.

The Future

What I’m implementing now is an approach to removing most of the overhead from JIT-compiled function calls.

Now, ideally what I’d like is to have every call site work “like C”: move the arguments to exactly where the callee expects them to be, and then call.  However, while looking at this I found some problems that make it tricky:

  • We still need to be able to call Lisp functions from C, so we’re limited to, at best, subr-style calling conventions;
  • While &rest arguments are straightforward (in our simple compiler, somebody just has to make the list); &optional arguments don’t have a good C-like analog.  The callee could push extra arguments, but…
  • In Lisp, a function can be redefined at any time, and it is fine to change the function’s signature.

Consider this example:

(defun callee (x &optional y) (list x y))
(defun caller () (callee 23))
(defun callee (x) (list x))

Now, if we compiled caller with a direct call, it would turn out like (callee 23 nil).  But then, after the redefinition, we’d have to recompile caller.  Note this can go the other way as well — we could redefine callee to have more optional arguments, or even more fixed arguments (meaning that the call should now throw an exception).

Recompiling isn’t such a big deal, right?  The compiler is set up very naively: it just compiles every function that is invoked, and in this mode “recompilation” is equivalent to “just abandon the compiled code”.

Except… what do you do if caller is being run when callee is redefined?  Whoops!

Actually, of course, this is a known issue in JIT compilation, and one possible solution is “on-stack replacement” (“OSR”) — recompiling a function while it is running.

This to me seemed like a lot of bookkeeping, though: keeping a list of which functions to compile when some function was redefined, and figuring out a decent way to implement OSR.

The Plan

Instead I came up with a simpler approach, involving — you guessed it — indirection.

On the callee side, I am going to keep the subr calling convention that is in place today.  This isn’t ideal in all cases, but it is reasonable for a lot of code.  Instead, all the changes will take place at spots where the JIT emits a call.

I am planning to have three kinds of function calls in the JIT:

  1. Indirect.  If we see some code where we can’t determine the callee, we’ll emit a call via Ffuncall like we do today.
  2. Fully direct.  There are some functions that are implemented in C, and that I think are unreasonable to redefine.  For these, we’ll just call the C function directly.  Another fully-direct case is where the code dispatches to a byte-code vector coming from the function’s constant pool — here, there’s no possibility to redefine the function, so we can simply always call the JIT-compiled form.
  3. Semi-direct.  This will be the convention used when JIT-compiled code calls via a symbol.

The core idea of a semi-direct call is to have multiple possible implementations of a function:

  • One “true” implementation.  If the function has 8 or fewer arguments (of any kind), it will simply have that many arguments.  The JIT will simply pretend that an optional argument is fixed.  If it has more than 8 arguments, following the subr convention it will just accept an array of arguments.
  • If the function has optional or rest arguments, there will be trampoline implementations with fewer arguments, that simply supply the required number of additional arguments and then call the true implementation.
  • Remember how there are exactly 10 relevant kinds of subr?  Any forms not covered by the above can simply throw an exception.
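As a toy illustration of the trampoline idea (the types here are stand-ins I made up for the sketch; the real code deals in Lisp_Object and Qnil, and the "true" implementation would build a list rather than add numbers):

```c
/* Stand-in for Emacs' Lisp_Object, purely for illustration. */
typedef int Lisp_Object;
#define Qnil 0

/* "True" implementation of (defun callee (x &optional y) ...):
   the JIT pretends the optional argument is fixed, so this is just
   an ordinary two-argument C function. */
Lisp_Object callee_impl(Lisp_Object x, Lisp_Object y)
{
    return x + y; /* pretend this builds (list x y) */
}

/* Trampoline for call sites that pass only one argument: supply nil
   for the missing optional, then forward to the true implementation. */
Lisp_Object callee_tramp_1(Lisp_Object x)
{
    return callee_impl(x, Qnil);
}
```

A redefinition of callee then only has to refresh the slots in the symbol's function-pointer vector; existing call sites keep loading whichever pointer matches their arity.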

A vector of function pointers will be attached to each symbol, and so the JIT-compiled code can simply load the function pointer from the appropriate slot (a single load — the nice thing about a JIT is we can simply hard-code the correct address).

Then, when a function is redefined, we simply define any of the trampolines that are required as well.  We won’t even need to define all of them — only the ones that some actually-existing call site has needed.

Of course, no project like this is complete without a rathole, which is why instead of doing this I’m actually working on writing a compiler pre-pass so that the compiler itself can have the appropriate information about the callee at the point of call.  This sub-project turns out to feel a lot like writing a Java bytecode verifier…

Further Future

Currently the JIT is only used for lexically-bound bytecode functions.  That’s a reasonable restriction, I think — so one thing we should do is make sure that more of the Emacs core is using lexical binding.  Currently, only about 1/3 of the Lisp files in Emacs enable this feature; but many more easily could.
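Opting a file in is a one-line change: lexical binding is enabled with a file-local variable on the first line of the file, for example:

```
;;; my-file.el --- some library  -*- lexical-binding: t; -*-
```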

Once my current project is done, the JIT will have a decent calling convention by default.  Since we’ll have information about callees at points of call, I think it will be a good time to look into inlining.  This will require tackling recompilation (and perhaps OSR) and having some sort of tiered optimization approach.  There is still a lot for me to learn here — when does it make sense to inline?  And what metrics should I use to decide when some code is hot enough to optimize?  So, good times ahead even once the current project is done; and BTW if you have a reading list for any of this I would love to hear about it.

Once this is done, well, I have more ideas for even deeper JIT improvements.  Those will have to wait for another post.

-1:-- Emacs JIT Calling Convention (Post tom)--L0--C0--March 03, 2018 09:53 PM

Phil Hagelberg: 186

I've enjoyed writing Lua code for my side projects for the most part. It has a few quirks, but many of them can be solved by a little linting. For a non-homoiconic language there is little in the syntax to object to, and the aggressively minimalist semantics more than make up for the few shortcomings.

Still, I always wondered what it would be like to use a language that compiled to Lua and didn't have problems like the statement vs expression distinction, lack of arity checks, or defaulting to globals. Over the past year or so I looked at several such languages[1], but nothing ever stuck. Then a few weeks ago I found Fennel (at the time called "fnl"), and it really resonated with me.

The Microlanguage

snowy sidewalk

The thing that sets Fennel apart is that it is strictly about adding new syntax to Lua and keeping Lua's semantics. This allows it to operate as a compiler which introduces no runtime overhead. Code in Fennel translates trivially to its Lua equivalent:

(let [x (+ 89 5.2)
      f (fn [abc] (print (* 2 abc)))]
  (f x))

... becomes ...

  local x = (89 + 5.2000000000000002)
  local function _0_(abc)
      return print((2 * abc))
  end
  local f = _0_
  return f(x)

There are a few quirks around introducing temporary variable names in cases where it's unnecessary like the above, but these are only readability concerns and do not affect the performance of the code, since Lua is smart enough to collapse it down. The temporary locals are introduced in order to ensure that every form in Fennel has a value; there are no statements, only expressions. This fixes a common problem in Lua where you can't use an if to calculate the value of an expression, because it's implemented as a statement, so you have to construct complex and/or chains to simulate if expressions. Plus it's just simpler and more consistent to omit statements completely from the language semantics.
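To make the statement-vs-expression point concrete, here is an illustrative comparison (my own example, not from the post):

```fennel
;; Lua needs the and/or trick, because if is a statement:
;;   local grade = score > 50 and "pass" or "fail"
;; In Fennel, if is an expression, so it can sit on the right-hand side:
(local score 72)
(local grade (if (> score 50) "pass" "fail"))
```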

The one exception to the no-overhead rule is Fennel's lambda form. Fennel's fn keyword compiles straight to a no-nonsense Lua function with all that implies. But Lua's function has one feature that is quite error-prone: it doesn't check to ensure that it was called with the correct number of arguments. This leads to nil values easily getting propagated all thru the call stack by mistake. Fennel's solution to this is lambda, which includes checks to ensure that this doesn't happen. This function will signal an error when it's called with zero or one argument, but the ?w and ?h arguments are optional:

(lambda [x y ?w ?h]
  (make-thing {:x x :y y
               :width (or ?w 10) :height (or ?h 10)}))

The other main difference between Fennel and Lua is that Fennel takes more care to distinguish between sequential tables and key/value tables. Of course on the Lua runtime there is no difference; only one kind of table exists, and whether it is sequential or not is just a matter of how it's constructed or used. Lua uses {} notation for all tables, but Fennel allows you to construct sequential tables (array-like tables which have consecutive integers as keys) using [] instead. Lua overloads the for keyword to iterate over a numeric range as well as to work with generic iterators like pairs for values in tables. Fennel uses each for the latter, which makes the difference clearer at a glance.
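A small illustrative snippet of those two distinctions:

```fennel
(local nums [1 2 3])           ; sequential table: {1, 2, 3} in Lua
(local sizes {:w 10 :h 20})    ; key/value table
(each [k v (pairs sizes)]      ; each for generic iterators, not for
  (print k v))
```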

The Compiler

To be clear, these are very small improvements over Lua. Normally I wouldn't consider switching to a new language over such things. But Fennel is unique in its simplicity and lack of overhead, so it doesn't cost much to bring it in. When I found out about Fennel, it was an experimental compiler that had been written over the course of one week in 2016 and then forgotten. But I was impressed with how useful it was after only that week of development. I could see at a glance that in around a thousand lines of code it had a functional compiler that output fairly readable Lua code and fixed the problem of statements.

So I dived into the codebase and started adding a few conveniences, starting with a test case and some static analysis. When Fennel's creator Calvin Rose saw what I was doing, he gave me feedback and started to pick back up on development too. As I got more comfortable with the code I started adding features, like each, when, and comments. Then I started putting it thru the paces by porting some of my existing Lua programs over[2]. This went much more smoothly than I anticipated. I did find a few compiler bugs, but they were either easy to fix myself or fixed quickly by Calvin Rose once I pointed them out. Once I had a few programs under my belt I wrote up an introductory tutorial to the language that you should read if you want to give it a try.

Tumwater falls, with raging water

But what about the lines?

But one thing really bothered me when writing Fennel programs. When you needed to debug, the Lua output was fairly readable, but you quickly ran into the curse of all source→source compilers: line numbers didn't add up. Some runtimes allow you to provide source maps which change the numbering on stack traces to match the original source code, but Lua runtimes don't offer this. If your compiler emits bytecode instead of source, you can set the line numbers for functions directly. But then you're tied to a single VM and have to sacrifice portability.

So what's a poor compiler to do? This being Fennel, the answer is "the simplest thing possible". In this case, the simplest thing is to track the line number when emitting Lua source, and only emit newlines when it's detected that the current output came from input with a line number greater than the one it's currently on.

For most compilers, this naive approach would quickly fall to pieces. Usually you can't rely on the output being in the same order as the input that generated it[3]. I honestly did not have high hopes for this when I started working on it. But because of Fennel's 1:1 simplicity and predictability, it actually works surprisingly well.
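Fennel's compiler is written in Lua, but the newline-tracking idea can be sketched in a few lines of Python (a hypothetical emit function of my own; it assumes compiled chunks arrive tagged with their source line, in source order):

```python
def emit(chunks):
    """Join compiled chunks, inserting newlines so each chunk lands on
    (at least) the source line it came from.

    chunks: list of (source_line, text) pairs, in source order.
    """
    out = []
    current_line = 1
    for src_line, text in chunks:
        # Only emit newlines when the input's line number is greater
        # than the line the output is currently on.
        while current_line < src_line:
            out.append("\n")
            current_line += 1
        out.append(text + " ")
    return "".join(out)
```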

The future?

At this point I'm pretty happy with where the compiler is, so my own plans are mostly just to write more Fennel code. The upcoming Lisp Game Jam will be a perfect excuse to do just that. I have a few ideas for further compiler improvements, like associative destructuring (sequential destructuring already works great), pattern matching, or even making the compiler self-hosting, but there's nothing quite like getting out there and banging out some programs.

[1] Here's a list of the lisps I found that compile to Lua and my brief impression of them:

An experimental lisp by the creator of Moonscript. Looks neat, but it requires an alpha build of Moonscript 0.2.0 from 2012 to run.
Inspired by Hy, this seems to be an improvement over Hy, since the latter inherits some of Python's unfortunate design bugs around scoping. If you're a Hy user, this might be a nice way to trade library availability for speed and consistency, but since the compiler is written in Hy it means you need two runtimes, which complicates deployment.
This looked promising when it was first announced, but it was based on a very early version of ClojureScript that was still quirky and didn't have a repl, and was abandoned a few months after it was announced. It has the same problem of requiring a separate runtime for the compiler as Hua, except that runtime needs dramatically greater resources.
This actually looks pretty nice if you like Scheme. It's pretty immature, but probably wouldn't take that much work to get to a usable state. Personally I find Lua tables to be much more friendly to work with than Scheme's lists, so sticking strictly with Scheme's semantics seems like a step backwards, but I know some people like it.
I actually did use this in my game. But since I tried it the compiler has been more or less completely rewritten. The unique thing about the new l2l compiler is that it allows you to mix Lua code and lisp code in the same file. I found it rather difficult to follow code that does this, but it's an interesting idea. The readme for l2l includes some very apt Taoist quotations, which earns it extra points in my book.
I saved the best for last. Urn is a very impressive language with a smart compiler that has great error messages, pattern matching, and tree-shaking to strip out code that's not used. The main reason I decided not to use Urn is that it wants to do a lot of optimization and analysis up-front and sacrifices some interactivity and reloading features to achieve it. As one who prizes interactivity above all other considerations, I found it a poor fit. Urn wants to be its own language that just uses Lua as a runtime, and that's great, but right now I'm looking for something that just takes Lua and makes the syntax nicer.

[2] My first program was a 1-page Pong, which is kind of the "hello world" of games. Then I ported the user-level config from Polywell, my Emacs clone, over to Fennel. This has made Polywell seem a lot more Emacsy than it used to be.

[3] Of course, once macros are introduced to the picture you can write code where this guarantee no longer applies. Oddly enough I'm not particularly interested in complex macros beyond things like pattern matching, so I don't see it being a problem for code I write, but it's worth noting that there are complications.

-1:-- 186 (Post Phil Hagelberg)--L0--C0--March 02, 2018 11:35 PM

Sanel Zukan: Extending org-mode Easy Templates

I frequently use org-mode Easy Templates and I noticed I often miss a shortcut for the comment block (BEGIN_COMMENT/END_COMMENT), which is very useful from time to time.

Let's add that feature.

(add-to-list 'org-structure-template-alist
             '("C" "#+BEGIN_COMMENT\n?\n#+END_COMMENT" ""))

C is the character used to trigger completion: after entering <C and pressing TAB, Emacs will replace it with this block:

#+BEGIN_COMMENT

#+END_COMMENT

? is the place where the cursor will be set after expansion.

The third argument in the list I'm adding (an empty string) is for Muse-like tags, which I'm not using at all.

A curious fact is that we are not limited to a single character to trigger expansion – we can use words too:

(add-to-list 'org-structure-template-alist
             '("comment" "#+BEGIN_COMMENT\n?\n#+END_COMMENT" ""))

Entering <comment and pressing TAB will again expand it to the above comment block.

-1:-- Extending org-mode Easy Templates (Post)--L0--C0--March 01, 2018 11:00 PM

Alex Schroeder: Emacs Packages

My Emacs setup is on GitHub but I might have to finally get rid of my special package handling code and move to something else instead.

How about using straight together with use-package?

One of these days!


-1:-- Emacs Packages (Post)--L0--C0--February 27, 2018 08:37 AM

Raimon Grau: SF emacs meetup

This week I attended an emacs meetup in SF just by pure chance. I was browsing /r/emacs and there was a comment about the meetup. I found out about it just the day before the event.

This being my first time in San Francisco, and given that I'll be around for just a couple of weeks (for my new job), it's even more surprising that I was able to go.

We were about 15 people, most from the bay area, and I think I was the only foreigner. The meetup topic was "a few of our favourite emacs modes", which unlocked the possibility to talk about helm-dash (not that it's my favourite mode, but it is the one I wrote (and I also find it quite helpful)).

So I volunteered and gave a really quick intro to helm-dash.

Others talked about evil, magit, pynt, multiple-cursors (that was nuts!), git-timemachine, use-package, and probably some more that I already forgot.

My discoveries were:

- evil can easily create new text objects.
- learn to use multiple cursors (although I prefer vim's substitutions, mc works better for multiline "macros", and gives you more visual feedback than emacs macros)
- pynt and emacs-ipython-notebook . If I ever do python again, I should remember that.
- use-package has more options than one usually thinks. RTFM
- ggtags is worth looking at if your language is not supported by etags/ctags.
- hydra red/blue modes.

Lots of fun during a couple of hours talking about tooling and workflows with random techies. See you next time I'm around.
-1:-- SF emacs meetup (Post Raimon Grau)--L0--C0--February 26, 2018 04:23 PM

Chris Wellons: Emacs Lisp Lambda Expressions Are Not Self-Evaluating

This week I made a mistake that ultimately enlightened me about the nature of function objects in Emacs Lisp. There are three kinds of function objects, but they each behave very differently when evaluated as objects.

But before we get to that, let’s talk about one of Emacs’ embarrassing, old missteps: eval-after-load.

Taming an old dragon

One of the long-standing issues with Emacs is that loading Emacs Lisp files (.el and .elc) is a slow process, even when those files have been byte compiled. There are a number of dirty hacks in place to deal with this issue, and the biggest and nastiest of them all is the dumper, also known as unexec.

The Emacs you routinely use throughout the day is actually a previous instance of Emacs that’s been resurrected from the dead. Your undead Emacs was probably created months, if not years, earlier, back when it was originally compiled. The first stage of compiling Emacs is to compile a minimal C core called temacs. The second stage is loading a bunch of Emacs Lisp files, then dumping a memory image in an unportable, platform-dependent way. On Linux, this actually requires special hooks in glibc. The Emacs you know and love is this dumped image loaded back into memory, continuing from where it left off just after it was compiled. Regardless of your own feelings on the matter, you have to admit this is a very lispy thing to do.

There are two notable costs to Emacs’ dumper:

  1. The dumped image contains hard-coded memory addresses. This means Emacs can’t be a Position Independent Executable (PIE). It can’t take advantage of a security feature called Address Space Layout Randomization (ASLR), which would increase the difficulty of exploiting some classes of bugs. This might be important to you if Emacs processes untrusted data, such as when it’s used as a mail client, a web server or generally parses data downloaded across the network.

  2. It’s not possible to cross-compile Emacs since it can only be dumped by running temacs on its target platform. As an experiment I’ve attempted to dump the Windows version of Emacs on Linux using Wine, but was unsuccessful.

The good news is that there’s a portable dumper in the works that makes this a lot less nasty. If you’re adventurous, you can already disable dumping and run temacs directly by setting CANNOT_DUMP=yes at compile time. Be warned, though, that a non-dumped Emacs takes several seconds, or worse, to initialize before it even begins loading your own configuration. It’s also somewhat buggy since it seems nobody ever runs it this way productively.

The other major way Emacs users have worked around slow loading is aggressive use of lazy loading, generally via autoloads. The major package interactive entry points are defined ahead of time as stub functions. These stubs, when invoked, load the full package, which overrides the stub definition, then finally the stub re-invokes the new definition with the same arguments.
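
The stub mechanism can be sketched with autoload; the package and command names here are illustrative, not from any real package:

```elisp
;; Illustrative stub: `my-package-command' is only a placeholder for now.
;; Invoking it loads "my-package.el", which overrides this definition,
;; and the stub then re-invokes the real command with the same arguments.
(autoload 'my-package-command "my-package"
  "Entry point for my-package (loads the package on first use)." t)
```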

To further assist with lazy loading, an evaluated defvar form will not override an existing global variable binding. This means you can, to a certain extent, configure a package before it’s loaded. The package will not clobber any existing configuration when it loads. This also explains the bizarre interfaces for the various hook functions, like add-hook and run-hooks. These accept symbols — the names of the variables — rather than values of those variables as would normally be the case. The add-to-list function does the same thing. It’s all intended to cooperate with lazy loading, where the variable may not have been defined yet.
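
A quick sketch of the defvar behavior described above (the variable name is made up):

```elisp
;; The second defvar sees an existing global binding and leaves it alone,
;; so a user's pre-load configuration survives the package loading.
(defvar my-demo-option 1)
(defvar my-demo-option 2)
my-demo-option
;; => 1
```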


Sometimes this isn’t enough and you need some configuration to take place after the package has been loaded, but without forcing it to load early. That is, you need to tell Emacs “evaluate this code after this particular package loads.” That’s where eval-after-load comes into play, except for its fatal flaw: it takes the word “eval” completely literally.

The first argument to eval-after-load is the name of a package. Fair enough. The second argument is a form that will be passed to eval after that package is loaded. Now hold on a minute. The general rule of thumb is that if you’re calling eval, you’re probably doing something seriously wrong, and this function is no exception. This is completely the wrong mechanism for the task.

The second argument should have been a function — either a (sharp quoted) symbol or a function object. And then instead of eval it would be something more sensible, like funcall. Perhaps this improved version would be named call-after-load or run-after-load.
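
A sketch of what such a function might look like; `run-after-load` is hypothetical, named after the author's suggestion, and built on top of the existing eval-after-load:

```elisp
;; Hypothetical `run-after-load': takes a function object instead of a
;; form, so the lambda is evaluated (and can be byte-compiled) at the
;; call site. The quote before the comma matters, as discussed below.
(defun run-after-load (feature function)
  (eval-after-load feature `(funcall ',function)))
```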

The big problem with passing an s-expression is that it will be left uncompiled due to being quoted. I’ve talked before about the importance of evaluating your lambdas. eval-after-load not only encourages badly written Emacs Lisp, it demands it.

;;; BAD!
(eval-after-load 'simple-httpd
                 '(push '("c" . "text/plain") httpd-mime-types))

This was all corrected in Emacs 25. If the second argument to eval-after-load is a function — the result of applying functionp is non-nil — then it uses funcall. There’s also a new macro, with-eval-after-load, to package it all up nicely.

;;; Better (Emacs >= 25 only)
(eval-after-load 'simple-httpd
  (lambda ()
    (push '("c" . "text/plain") httpd-mime-types)))

;;; Best (Emacs >= 25 only)
(with-eval-after-load 'simple-httpd
  (push '("c" . "text/plain") httpd-mime-types))

Though in both of these examples the compiler will likely warn about httpd-mime-types not being defined. That’s a problem for another day.

A workaround

But what if you need to use Emacs 24, as was the situation that sparked this article? What can we do with the bad version of eval-after-load? We could situate a lambda such that it’s evaluated, but then smuggle the resulting function object into the form passed to eval-after-load, all using a backquote.

;;; Note: this is subtly broken
(eval-after-load 'simple-httpd
  `(funcall
    ,(lambda ()
       (push '("c" . "text/plain") httpd-mime-types))))

When everything is compiled, the backquoted form evaluates to this:

(funcall #[0 <bytecode> [httpd-mime-types ("c" . "text/plain")] 2])

Where the second value (#[...]) is a byte-code object. However, as the comment notes, this is subtly broken. A cleaner and correct way to solve all this is with a named function. The damage caused by eval-after-load will have been (mostly) minimized.

(defun my-simple-httpd-hook ()
  (push '("c" . "text/plain") httpd-mime-types))

(eval-after-load 'simple-httpd
  '(funcall #'my-simple-httpd-hook))

But, let’s go back to the anonymous function solution. What was broken about it? It all has to do with evaluating function objects.

Evaluating function objects

So what happens when we evaluate an expression like the one above with eval? Here’s what it looks like again.

(funcall #[...])

First, eval notices it’s been given a non-empty list, so it’s probably a function call. The first element is the name of the function to be called (funcall) and the remaining elements are its arguments. But each of these elements must be evaluated first, and the results of that evaluation become the arguments.

Any value that isn’t a list or a symbol is self-evaluating. That is, it evaluates to its own value:

(eval 10)
;; => 10

If the value is a symbol, it’s treated as a variable. If the value is a list, it goes through the function call process I’m describing (or one of a number of other special cases, such as macro expansion, lambda expressions, and special forms).

So, conceptually eval recurses on the function object #[...]. A function object is not a list or a symbol, so it’s self-evaluating. No problem.

;; Byte-code objects are self-evaluating

(let ((x (byte-compile (lambda ()))))
  (eq x (eval x)))
;; => t

What if this code wasn’t compiled? Rather than a byte-code object, we’d have some other kind of function object for the interpreter. Let’s examine the dynamic scope (shudder) case. Here, a lambda appears to evaluate to itself, but appearances can be deceiving:

(eval (lambda ()))
;; => (lambda ())

However, this is not self-evaluation. Lambda expressions are not self-evaluating. It’s merely coincidence that the result of evaluating a lambda expression looks like the original expression. This is just how the Emacs Lisp interpreter is currently implemented and, strictly speaking, it’s an implementation detail that just so happens to be mostly compatible with byte-code objects being self-evaluating. It would be a mistake to rely on this.

Instead, dynamic scope lambda expression evaluation is idempotent. Applying eval to the result will return an equal, but not identical (eq), expression. In contrast, a self-evaluating value is also idempotent under evaluation, but with eq results.

;; Not self-evaluating:

(let ((x '(lambda ())))
  (eq x (eval x)))
;; => nil

;; Evaluation is idempotent:

(let ((x '(lambda ())))
  (equal x (eval x)))
;; => t

(let ((x '(lambda ())))
  (equal x (eval (eval x))))
;; => t

So, with dynamic scope, the subtly broken backquote example will still work, but only by sheer luck. Under lexical scope, the situation isn’t so lucky:

;;; -*- lexical-binding: t; -*-

(lambda ())
;; => (closure (t) nil)

These interpreted lambda functions are neither self-evaluating nor idempotent. Passing t as the second argument to eval tells it to use lexical scope, as shown below:

;; Not self-evaluating:

(let ((x '(lambda ())))
  (eq x (eval x t)))
;; => nil

;; Not idempotent:

(let ((x '(lambda ())))
  (equal x (eval x t)))
;; => nil

(let ((x '(lambda ())))
  (equal x (eval (eval x t) t)))
;; error: (void-function closure)

I can imagine an implementation of Emacs Lisp where dynamic scope lambda expressions are in the same boat, where they’re not even idempotent. For example:

;;; -*- lexical-binding: nil; -*-

(lambda ())
;; => (totally-not-a-closure ())

Most Emacs Lisp would work just fine under this change, and only code that makes some kind of logical mistake — where there’s nested evaluation of lambda expressions — would break. This essentially already happened when lots of code was quietly switched over to lexical scope after Emacs 24. Lambda idempotency was lost and well-written code didn’t notice.

There’s a temptation here for Emacs to define a closure function or special form that would allow interpreter closure objects to be either self-evaluating or idempotent. This would be a mistake. It would only serve as a hack that covers up logical mistakes that lead to nested evaluation. Much better to catch those problems early.

Solving the problem with one character

So how do we fix the subtly broken example? With a strategically placed quote right before the comma.

(eval-after-load 'simple-httpd
  `(funcall
    ',(lambda ()
        (push '("c" . "text/plain") httpd-mime-types))))

So the form passed to eval-after-load becomes:

;; Compiled:
(funcall (quote #[...]))

;; Dynamic scope:
(funcall (quote (lambda () ...)))

;; Lexical scope:
(funcall (quote (closure (t) () ...)))

The quote prevents eval from evaluating the function object, which would be either needless or harmful. There’s also an argument to be made that this is a perfect situation for a sharp-quote (#'), which exists to quote functions.
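
For completeness, the sharp-quoted version of the same workaround, equivalent here to the plain quote:

```elisp
;; #' (function) instead of ' (quote): same effect on a function
;; object, but it documents the intent of quoting a function.
(eval-after-load 'simple-httpd
  `(funcall
    #',(lambda ()
         (push '("c" . "text/plain") httpd-mime-types))))
```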

-1:-- Emacs Lisp Lambda Expressions Are Not Self-Evaluating (Post)--L0--C0--February 22, 2018 09:30 PM

Alex Bennée: Workbooks for Benchmarking

While working on a major re-factor of QEMU’s softfloat code I’ve been doing a lot of benchmarking. It can be quite tedious work as you need to be careful you’ve run the correct steps on the correct binaries and keeping notes is important. It is a task that cries out for scripting but that in itself can be a compromise as you end up stitching a pipeline of commands together in something like perl. You may script it all in a language designed for this sort of thing like R but then find your final upload step is a pain to implement.

One solution to this is to use a literate programming workbook like this. Literate programming is a style where you interleave your code with natural prose describing the steps you go through. This is different from simply having well commented code in a source tree. For one thing you do not have to leap around a large code base as everything you need is in the file you are reading, from top to bottom. There are many solutions out there including various python based examples. Of course being a happy Emacs user I use one of its stand-out features org-mode which comes with multi-language org-babel support. This allows me to document my benchmarking while scripting up the steps in a variety of “languages” depending on my needs at the time. Let’s take a look at the first section:

1 Binaries To Test

Here we have several tables of binaries to test. We refer to the
current benchmarking set from the next stage, Run Benchmark.

For a final test we might compare the system QEMU with a reference
build as well as our current build.

| Binary                                               | title            |
|------------------------------------------------------+------------------|
| /usr/bin/qemu-aarch64                                | system-2.5.log   |
| ~/lsrc/qemu/qemu-builddirs/                          | master.log       |
| ~/lsrc/qemu/qemu.git/aarch64-linux-user/qemu-aarch64 | softfloat-v4.log |

Well that is certainly fairly explanatory. These are named org-mode tables which can be referred to in other code snippets and passed in as variables. So the next job is to run the benchmark itself:
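
Wiring a named table into a source block looks roughly like this; the table and block names are illustrative, following the same `#+name:` convention the post uses later:

```org
#+name: files
| Binary                | title          |
|-----------------------+----------------|
| /usr/bin/qemu-aarch64 | system-2.5.log |

#+begin_src python :var files=files
  # org-babel passes the table in as a list of rows
  return [logname for (qemu, logname) in files]
#+end_src
```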

2 Run Benchmark

This runs the benchmark against each binary we have selected above.

    import subprocess
    import os

    # `files` (the table above) and `tests` are passed in via :var
    runs = []
    for qemu, logname in files:
        cmd = "taskset -c 0 %s ./vector-benchmark -n %s | tee %s" % (qemu, tests, logname)
        subprocess.call(cmd, shell=True)
        runs.append(logname)

    return runs

So why use python as the test runner? Well truth is whenever I end up munging arrays in shell script I forget the syntax and end up jumping through all sorts of hoops. Easier just to have some simple python. I use python again later to read the data back into an org-table so I can pass it to the next step, graphing:

set title "Vector Benchmark Results (lower is better)"
set style data histograms
set style fill solid 1.0 border lt -1

set xtics rotate by 90 right
set yrange [:]
set xlabel noenhanced
set ylabel "nsecs/Kop" noenhanced
set xtics noenhanced
set ytics noenhanced
set boxwidth 1
set xtics format ""
set xtics scale 0
set grid ytics
set term pngcairo size 1200,500

plot for [i=2:5] data using i:xtic(1) title columnhead

This is a GNU Plot script which takes the data and plots an image from it. org-mode takes care of the details of marshalling the table data into GNU Plot so all this script is really concerned with is setting styles and titles. The language is capable of some fairly advanced stuff but I could always pre-process the data with something else if I needed to.

Finally I need to upload my graph to an image hosting service to share with my colleagues. This can be done with an elaborate curl command but I have another trick at my disposal thanks to the excellent restclient-mode. This mode is actually designed for interactive debugging of REST APIs but it is also easy to use from an org-mode source block. So the whole thing looks like an HTTP session:

:client_id = feedbeef

# Upload images to imgur
POST https://api.imgur.com/3/image
Authorization: Client-ID :client_id
Content-type: image/png

< benchmark.png

Finally because the above dumps all the headers when run (which is very handy for debugging) I actually only want the URL in most cases. I can do this simply enough in elisp:

#+name: post-to-imgur
#+begin_src emacs-lisp :var json-string=upload-to-imgur()
  (when (string-match
         (rx "link" (one-or-more (any "\":" whitespace))
             (group (one-or-more (not (any "\"")))))
         json-string)
    (match-string 1 json-string))
#+end_src
The :var line calls the restclient-mode function automatically and passes it the result which it can then extract the final URL from.

And there you have it: my entire benchmarking workflow documented in a single file, which I can read through, tweaking each step as I go. This isn’t the first time I’ve done this sort of thing. As I use org-mode extensively as a logbook to keep track of my upstream work I’ve slowly grown a series of scripts for common tasks. For example every patch series and pull request I post is done via org. I keep the whole thing in a git repository so each time I finish a sequence I can commit the results into the repository as a permanent record of what steps I ran.

If you want even more inspiration I suggest you look at John Kitchin’s scimax work. As a publishing scientist he makes extensive use of org-mode when writing his papers. He is able to include the main prose with the code to plot the graphs and tables in a single source document from which his camera-ready documents are generated. Should he ever need to reproduce any work his exact steps are all there in the source document. Yet another example of why org-mode is awesome 😉

-1:-- Workbooks for Benchmarking (Post Alex)--L0--C0--February 21, 2018 08:34 PM

Chris Wellons: Options for Structured Data in Emacs Lisp

Russian translation by ClipArtMag.
Ukrainian translation by Open Source Initiative.

So your Emacs package has grown beyond a dozen or so lines of code, and the data it manages is now structured and heterogeneous. Informal plain old lists, the bread and butter of any lisp, are no longer cutting it. You really need to cleanly abstract this structure, both for your own organizational sake and for anyone reading your code.

With informal lists as structures, you might regularly ask questions like, “Was the ‘name’ slot stored in the third list element, or was it the fourth element?” A plist or alist helps with this problem, but those are better suited for informal, externally-supplied data, not for internal structures with fixed slots. Occasionally someone suggests using hash tables as structures, but Emacs Lisp’s hash tables are much too heavy for this. Hash tables are more appropriate when keys themselves are data.

Defining a data structure from scratch

Imagine a refrigerator package that manages a collection of food in a refrigerator. A food item could be structured as a plain old list, with slots at specific positions.

(defun fridge-item-create (name expiry weight)
  (list name expiry weight))

A function that computes the mean weight of a list of food items might look like this:

(defun fridge-mean-weight (items)
  (if (null items)
      0.0
    (let ((sum 0.0)
          (count 0))
      (dolist (item items (/ sum count))
        (setf count (1+ count)
              sum (+ sum (nth 2 item)))))))

Note the use of (nth 2 item) at the end, used to get the item’s weight. That magic number 2 is easy to mess up. Even worse, if lots of code accesses “weight” this way, then future extensions will be inhibited. Defining some accessor functions solves this problem.

(defsubst fridge-item-name (item)
  (nth 0 item))

(defsubst fridge-item-expiry (item)
  (nth 1 item))

(defsubst fridge-item-weight (item)
  (nth 2 item))

The defsubst defines an inline function, so there’s effectively no additional run-time costs for these accessors compared to a bare nth. Since these only cover getting slots, we should also define some setters using the built-in gv (generalized variable) package.

(require 'gv)

(gv-define-setter fridge-item-name (value item)
  `(setf (nth 0 ,item) ,value))

(gv-define-setter fridge-item-expiry (value item)
  `(setf (nth 1 ,item) ,value))

(gv-define-setter fridge-item-weight (value item)
  `(setf (nth 2 ,item) ,value))

This makes each slot setf-able. Generalized variables are great for simplifying APIs, since otherwise there would need to be an equal number of setter functions (fridge-item-set-name, etc.). With generalized variables, both are at the same entrypoint:

(setf (fridge-item-name item) "Eggs")

There are still two more significant improvements.

  1. As far as Emacs Lisp is concerned, this isn’t a real type. The type-ness of it is just a fiction created by the conventions of the package. It would be easy to make the mistake of passing an arbitrary list to these fridge-item functions, and the mistake wouldn’t be caught so long as that list has at least three items. A common solution is to add a type tag: a symbol at the beginning of the structure that identifies it.

  2. It’s still a linked list, and nth has to walk the list (i.e. O(n)) to retrieve items. It would be much more efficient to use a vector, turning this into an efficient O(1) operation.

Addressing both of these at once:

(defun fridge-item-create (name expiry weight)
  (vector 'fridge-item name expiry weight))

(defsubst fridge-item-p (object)
  (and (vectorp object)
       (= (length object) 4)
       (eq 'fridge-item (aref object 0))))

(defsubst fridge-item-name (item)
  (unless (fridge-item-p item)
    (signal 'wrong-type-argument (list 'fridge-item item)))
  (aref item 1))

(defsubst fridge-item-name--set (item value)
  (unless (fridge-item-p item)
    (signal 'wrong-type-argument (list 'fridge-item item)))
  (setf (aref item 1) value))

(gv-define-setter fridge-item-name (value item)
  `(fridge-item-name--set ,item ,value))

;; And so on for expiry and weight...

As long as fridge-mean-weight uses the fridge-item-weight accessor, it continues to work unmodified across all these changes. But, whew, that’s quite a lot of boilerplate to write and maintain for each data structure in our package! Boilerplate code generation is a perfect candidate for a macro definition. Luckily for us, Emacs already defines a macro to generate all this code: cl-defstruct.

(require 'cl-lib)

(cl-defstruct fridge-item
  name expiry weight)

In Emacs 25 and earlier, this innocent looking definition expands into essentially all the above code. The code it generates is expressed in the most optimal form for its version of Emacs, and it exploits many of the available optimizations by using function declarations such as side-effect-free and error-free. It’s configurable, too, allowing for the exclusion of a type tag (:named) — discarding all the type checks — or using a list rather than a vector as the underlying structure (:type). As a crude form of structural inheritance, it even allows for directly embedding other structures (:include).
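
A sketch of those knobs in use (the struct name here is illustrative):

```elisp
;; List-backed and untagged: accessors become plain nth calls and no
;; type checking is generated, trading safety for a simpler layout.
(cl-defstruct (fridge-item-raw (:type list))
  name expiry weight)

(fridge-item-raw-weight (make-fridge-item-raw :weight 3.9))
;; => 3.9
```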

Two pitfalls

There are a couple of pitfalls, though. First, for historical reasons, the macro will define two namespace-unfriendly functions: make-NAME and copy-NAME. I always override these, preferring the -create convention for the constructor, and tossing the copier since it’s either useless or, worse, semantically wrong.

(cl-defstruct (fridge-item (:constructor fridge-item-create)
                           (:copier nil))
  name expiry weight)

If the constructor needs to be more sophisticated than just setting slots, it’s common to define a “private” constructor (double dash in the name) and wrap it with a “public” constructor that has some behavior.

(cl-defstruct (fridge-item (:constructor fridge-item--create)
                           (:copier nil))
  name expiry weight entry-time)

(cl-defun fridge-item-create (&rest args)
  (apply #'fridge-item--create :entry-time (float-time) args))

The other pitfall is related to printing. In Emacs 25 and earlier, types defined by cl-defstruct are still only types by convention. They’re really just vectors as far as Emacs Lisp is concerned. One benefit from this is that printing and reading these structures is “free” because vectors are printable. It’s trivial to serialize cl-defstruct structures out to a file. This is exactly how the Elfeed database works.
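
A minimal sketch of that “free” serialization, using the fridge-item-create constructor defined above (the file name is made up):

```elisp
;; Print a struct to a file, then read it back with the Lisp reader.
(with-temp-file "/tmp/fridge-db.eld"
  (prin1 (fridge-item-create :name "Eggs" :expiry nil :weight 11.1)
         (current-buffer)))

(with-temp-buffer
  (insert-file-contents "/tmp/fridge-db.eld")
  (read (current-buffer)))
;; => a fridge-item `equal' to the one written out
```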

The pitfall is that once a structure has been serialized, there’s no more changing the cl-defstruct definition. It’s now a file format definition, so the slots are locked in place. Forever.

Emacs 26 throws a wrench in all this, though it’s worth it in the long run. There’s a new primitive type in Emacs 26 with its own reader syntax: records. This is similar to hash tables becoming first class in the reader in Emacs 23.2. In Emacs 26, cl-defstruct uses records instead of vectors.

;; Emacs 25:
(fridge-item-create :name "Eggs" :weight 11.1)
;; => [cl-struct-fridge-item "Eggs" nil 11.1]

;; Emacs 26:
(fridge-item-create :name "Eggs" :weight 11.1)
;; => #s(fridge-item "Eggs" nil 11.1)

So far slots are still accessed using aref, and all the type checking still happens in Emacs Lisp. The only practical change is the record function is used in place of the vector function when allocating a structure. But it does pave the way for more interesting things in the future.

The major short-term downside is that this breaks printed compatibility across the Emacs 25/26 boundary. The cl-old-struct-compat-mode function can be used for some degree of backwards, but not forwards, compatibility. Emacs 26 can read and use some structures printed by Emacs 25 and earlier, but the reverse will never be true. This issue initially tripped up Emacs’ built-in packages, and when Emacs 26 is released we’ll see more of these issues arise in external packages.

Dynamic dispatch

Prior to Emacs 25, the major built-in package for dynamic dispatch — functions that specialize on the run-time type of their arguments — was EIEIO, though it only supported single dispatch (specializing on a single argument). EIEIO brought much of the Common Lisp Object System (CLOS) to Emacs Lisp, including classes and methods.

Emacs 25 introduced a more sophisticated dynamic dispatch package called cl-generic. It focuses only on dynamic dispatch and supports multiple dispatch, completely replacing the dynamic dispatch portion of EIEIO. Since cl-defstruct does inheritance and cl-generic does dynamic dispatch, there’s not really much left for EIEIO — besides bad ideas like multiple inheritance and method combination.
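
For example, a cl-generic method can specialize on the types of more than one argument, something EIEIO could never do (the names here are illustrative):

```elisp
(require 'cl-generic)

(cl-defgeneric combine (a b))

;; Dispatch on the runtime types of *both* arguments.
(cl-defmethod combine ((a string) (b string)) (concat a b))
(cl-defmethod combine ((a number) (b number)) (+ a b))

(combine "foo" "bar")  ; => "foobar"
(combine 1 2)          ; => 3
```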

Without either of these packages, the most direct way to build single dispatch on top of cl-defstruct would be to shove a function in one of the slots. Then the “method” is just a wrapper that calls this function.

;; Base "class"

(cl-defstruct greeter
  greeting)
(defun greet (thing)
  (funcall (greeter-greeting thing) thing))

;; Cow "class"

(cl-defstruct (cow (:include greeter)
                   (:constructor cow--create)))

(defun cow-create ()
  (cow--create :greeting (lambda (_) "Moo!")))

;; Bird "class"

(cl-defstruct (bird (:include greeter)
                    (:constructor bird--create)))

(defun bird-create ()
  (bird--create :greeting (lambda (_) "Chirp!")))

;; Usage:

(greet (cow-create))
;; => "Moo!"

(greet (bird-create))
;; => "Chirp!"

Since cl-generic is aware of the types created by cl-defstruct, functions can specialize on them as if they were native types. It’s a lot simpler to let cl-generic do all the hard work. The people reading your code will appreciate it, too:

(require 'cl-generic)

(cl-defgeneric greet (greeter))

(cl-defstruct cow)

(cl-defmethod greet ((_ cow))
  "Moo!")

(cl-defstruct bird)

(cl-defmethod greet ((_ bird))
  "Chirp!")

(greet (make-cow))
;; => "Moo!"

(greet (make-bird))
;; => "Chirp!"

The majority of the time a simple cl-defstruct will fulfill your needs, keeping in mind the gotcha with the constructor and copier names. Its use should feel almost as natural as defining functions.

-1:-- Options for Structured Data in Emacs Lisp (Post)--L0--C0--February 14, 2018 05:43 PM

Alex Schroeder: Buttery Smooth Emacs

This is the best blog post about Emacs in a long time. I’m still laughing. Buttery Smooth Emacs, Friday, October 28, 2016.

«GNU Emacs is an old-school C program emulating a 1980s Symbolics Lisp Machine emulating an old-fashioned Motif-style Xt toolkit emulating a 1970s text terminal emulating a 1960s teletype.»

«Emacs organizes its view of the outside world into frames (what the rest of the world calls “windows”), windows (which the rest of the world calls “panes”), and buffers (which the rest of the world calls “documents”).»

«Did Emacs just adapt to whatever these non-Xt toolkits did? Did Emacs adopt modern best practices? GTK+ is a modern GUI library. Emacs supports GTK+. Is Emacs a well-behaved GTK+ program now?»

«What’s particularly hilarious is that SIGIO can happen in the middle of redisplay. The REPL loop (in the Emacs case, not Read Eval Print, but Read Eval WTF) can be recursive.»

No, really. This blog post just keeps on giving.

(I don’t get why people use Facebook as their blog but whatever, this blog post is great.)


-1:-- Buttery Smooth Emacs (Post)--L0--C0--February 12, 2018 05:55 PM

Tom Tromey: JIT Compilation for Emacs

There have been a few efforts at writing an Emacs JIT — the original one, Burton Samograd’s, and also Nick Lloyd’s. So, what else to do except write my own?

Like the latter two, I based mine on GNU libjit. I did look at a few other JIT libraries: LLVM, gcc-jit, GNU Lightning, MyJit.  libjit seemed like a nice middle ground between a JIT with heavy runtime costs (LLVM, GCC) and one that is too lightweight (Lightning).

All of these Emacs JITs work by compiling bytecode to native code.  Now, I don’t actually think that is the best choice — it’s just the easiest — but my other project to do a more complete job in this area isn’t really ready to be released.  So bytecode it is.

Emacs implements a somewhat weird stack-based bytecode.  Many ordinary things are there, but seemingly obvious stack operations like “swap” do not exist; and there are bytecodes for very specialized Emacs operations like forward-char or point-max.

Samograd describes his implementation as “compiling down the spine”.  What he means by this is that the body of each opcode is implemented by some C function, and the JIT compiler emits, essentially, a series of subroutine calls.  This used to be called “jsr threading” in the olden days, though maybe it has some newer names by now.

Of course, we can do better than this, and Lloyd’s JIT does.  His emits instructions for the bodies of most bytecodes, deferring only a few to helper functions.  This is a better approach because many of these operations are only one or two instructions.

However, his approach takes a wrong turn by deferring stack operations to the compiled code.  For example, in this JIT, the Bdiscard opcode, which simply drops some items from the stack, is implemented as:

 CASE (Bdiscard):
   JIT_INC (ctxt.stack, -sizeof (Lisp_Object));

It turns out, though, that this isn’t needed — at least, for the bytecode generated by the Emacs byte-compiler, the stack depth at any given PC is a constant.  This means that the stack adjustments can be done at compile time, not runtime, leading to a performance boost.  So, the above opcode doesn’t need to emit code at all.

(And, if you’re worried about hand-crafted bytecode, it’s easy to write a little bytecode verifier to avoid JIT-compiling invalid things.  Though of course you shouldn’t worry, since you can already crash Emacs with bad bytecode.)

So, naturally, my implementation does not do this extra work.  And, it inlines more operations besides.


I’ve only enabled the JIT for bytecode that uses lexical binding.  There isn’t any problem enabling it everywhere, I just figured it probably isn’t that useful, and so I didn’t bother.


The results are pretty good.  First of all, I have it set up to automatically JIT compile every function, and this doesn’t seem any slower than ordinary Emacs, and it doesn’t crash.

Using the “silly-loop” example from the Emacs Lisp manual, with lexical binding enabled, I get these results:

Mode Time
Interpreted 4.48
Byte compiled 0.91
JIT 0.26

This is essentially the best case for this JIT, though.

Future Directions

I have a few ideas for how to improve the performance of the generated code.  One way to look at this is to look at Emacs’ own C code, to see what advantages it has over JIT-compiled code.  There are really three: cheaper function calls, inlining, and unboxing.

Calling a function in Emacs Lisp is quite expensive.  A call from the JIT requires marshalling the arguments into an array, then calling Ffuncall; which then might dispatch to a C function (a “subr”), the bytecode interpreter, or the ordinary interpreter.  In some cases this may require allocation.

This overhead applies to nearly every call — but the C implementation of Emacs is free to call various primitive functions directly, without using Ffuncall to indirect through some Lisp symbol.

Now, these direct calls aren’t without a cost: they prevent the modification of some functions from Lisp.  Sometimes this is a pain (it might be handy to hack on load from Lisp), but in many cases it is unimportant.

So, one idea for the JIT is to keep a list of such functions and then emit direct calls rather than indirect ones.

Even better than this would be to improve the calling convention so that all calls are less expensive.  However, because a function can be redefined with different arguments, it is tricky to see how to do this efficiently.

In the Emacs C code, many things are inlined that still aren’t inlined in the JIT — just look through lisp.h for all the inline functions (and/or macros, lisp.h is “unusual”).  Many of these things could be done in the JIT, though in some cases it might be more work than it is worth.

Even better, but also even more difficult, would be inlining from one bytecode function into another.  High-performance JITs do this when they notice a hot spot in the code.

Finally, unboxing.  In the Emacs C code, it’s relatively normal to type-check Lisp objects and then work solely in terms of their C analogues after that point.  This is more efficient because it hoists the tag manipulations.  Some work like this could be done automatically, by writing optimization passes for libjit that work on libjit’s internal representation of functions.
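A sketch of that transformation, using an invented one-bit tagging scheme rather than Emacs' real representation:

```c
#include <assert.h>

/* Toy fixnum tagging, illustration only: low bit 1 marks a fixnum,
   the value lives in the remaining bits. */
typedef long Lisp_Object;
#define MAKE_FIXNUM(n) ((Lisp_Object)(((long)(n) << 1) | 1))
#define FIXNUMP(o)     (((o) & 1) != 0)
#define XFIXNUM(o)     ((long)(o) >> 1)

/* Naive translation: every iteration unboxes, re-boxes, and keeps
   the accumulator tagged. */
static Lisp_Object sum_boxed(const Lisp_Object *v, int n)
{
  Lisp_Object acc = MAKE_FIXNUM(0);
  for (int i = 0; i < n; i++)
    acc = MAKE_FIXNUM(XFIXNUM(acc) + XFIXNUM(v[i]));
  return acc;
}

/* After unboxing: the type checks and tag manipulation are hoisted,
   the loop works on plain machine integers, and the result is boxed
   once at the end. */
static Lisp_Object sum_unboxed(const Lisp_Object *v, int n)
{
  long acc = 0;
  for (int i = 0; i < n; i++) {
    assert(FIXNUMP(v[i]));  /* the check a pass could hoist or merge */
    acc += XFIXNUM(v[i]);
  }
  return MAKE_FIXNUM(acc);
}
```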

Getting the Code

The code is on the libjit branch in my Emacs repository on GitHub.  You’ll have to build your own libjit, too, and if you want to avoid hacking on the Emacs Makefile, you will need my fork of libjit that adds pkg-config files.

-1:-- JIT Compilation for Emacs (Post tom)--L0--C0--February 08, 2018 11:05 PM

Alex Bennée: FOSDEM 2018

I’ve just returned from a weekend in Brussels for my first ever FOSDEM – the Free and Open Source Developers, European Meeting. It’s been on my list of conferences to go to for some time and thanks to getting my talk accepted, my employer financed the cost of travel and hotels. Thanks to the support of the Université libre de Bruxelles (ULB) the event itself is free and run entirely by volunteers. As you can expect from the name they also have a strong commitment to free and open source software.

The first thing that struck me about the conference is how wide ranging it was. There were talks on everything from the internals of debugging tools to developing public policy. When I first loaded up their excellent companion app (naturally via the F-Droid repository) I was somewhat overwhelmed by the choice. As it is a free conference there is no limit on the numbers who can attend which means you are not always guaranteed to be able to get into every talk. In fact during the event I walked past many long queues for the more popular talks. In the end I ended up just bookmarking all the talks I was interested in and deciding which one to go to depending on how I felt at the time. Fortunately FOSDEM have a strong archiving policy and video most of their talks so I’ll be spending the next few weeks catching up on the ones I missed.

There now follows a non-exhaustive list of the most interesting ones I was able to see live:

Dashamir’s talk on EasyGPG dealt with the opinionated decisions it makes to try and make the use of GnuPG more intuitive to those not versed in the full gory details of public key cryptography. Although I use GPG mainly for signing GIT pull requests I really should make better use it over all. The split-key solution to backups was particularly interesting. I suspect I’ll need a little convincing before I put part of my key in the cloud but I’ll certainly check out his scripts.

Liam’s A Circuit Less Travelled was an entertaining tour of some of the technologies and ideas from early computer history that got abandoned on the wayside. These ideas were often re-invented later, in inferior form, as engineers realised the error of their ways as technology advanced. The latter half of the talk turns into a bit of a LISP love-fest but as an Emacs user with an ever growing config file that is fine by me 😉

Following on in the history vein was Steven Goodwin’s talk on Digital Archaeology which was a salutary reminder of the amount of recent history that is getting lost as computing’s breakneck pace has discarded old physical formats in favour of newer, equally short-lived formats. It reminded me I should really do something about the 3 boxes of floppy disks I have under my desk. I also need to schedule a visit to the Computer History Museum with my children seeing as it is more or less on my doorstep.

There was a tongue-in-cheek preview that described the EDSAC talk as recreating “an ancient computer without any of the things that made it interesting”. This was a little unkind. Although the project re-implemented the computation parts in a tiny little FPGA the core idea was to introduce potential students to the physicality of the early computers. After an introduction to the hoary architecture of the original EDSAC and the Wheeler Jump Mary introduced the hardware they re-imagined for the project. The first was an optical reader developed to read in paper tapes although this time ones printed on thermal receipt paper. This included an in-depth review of the problems of smoothing out analogue inputs to get reliable signals from their optical sensors which mirrors the problems the rebuild is facing with the nature of the valves used in EDSAC. It is a shame they couldn’t come up with some way to involve a valve but I guess high-tension supplies and school kids don’t mix well. However they did come up with a way of re-creating the original acoustic mercury delay lines but this time with a tube of air and some 3D printed parabolic ends.

The big geek event was the much anticipated announcement of RISC-V hardware during the RISC-V enablement talk. It seemed to be an open secret the announcement was coming but it still garnered hearty applause when it finally came. I should point out I’m indirectly employed by companies with an interest in a competing architecture but it is still good to see other stuff out there. The board is fairly open but some peripheral IP blocks are still closed, which shows just how tricky getting to fully-free hardware is going to be. As I understand the RISC-V licensing model, the ISA is open (unlike for example an ARM Architecture License) but individual companies can still have closed implementations which they license to be manufactured, which is how I assume SiFive funds development. The actual CPU implementation is still very much a black box you have to take on trust.

Finally, my talk is already online for those interested in what I’m currently working on. The slides have been slightly cropped in the video but if you follow the link to the HTML version you can read along on your machine.

I have to say FOSDEM’s setup is pretty impressive. Although there was a volunteer in each room to deal with fire safety and replace microphones all the recording is fully automated. There are rather fancy hand crafted wooden boxes in each room which take the feed from your laptop and mux it with the camera. I got the email from the automated system asking me to review a preview of my talk about half an hour after I gave it. It took a little longer for the final product to get encoded and online but it’s certainly the nicest system I’ve come across so far.

All in all I can heartily recommend FOSDEM for anyone with an interest in FLOSS. It’s a packed schedule and there is going to be something for everyone there. Big thanks to all the volunteers and organisers and I hope I can make it next year 😉

-1:-- FOSDEM 2018 (Post Alex)--L0--C0--February 06, 2018 09:36 AM

Manuel Uberti: Getting ready for Dutch Clojure Days

As a functional programming jock, I have a confession to make: I have never been to a Clojure conference. There was a bit of Clojure in the now defunct LambdaCon, but the talks there were not as riveting as the ones I caught on YouTube from the likes of Clojure/conj, clojuTRE and Clojure/west.

No need to sound depressing, though. Thanks to 7bridges, I will happily attend Dutch Clojure Days 2018 on April 21st. As much as my enthusiasm is hard to contain, I plan to fulfil a bunch of resolutions without losing myself in total exuberance.

Learn something new

I know the list of speakers is not ready yet, but surely something new and good is waiting for me. This is usually what happens with the talks once the conference I missed makes the videos available, therefore I am pretty confident there is going to be a lot of food for my brain.

Learn something better

As far as my Clojure projects go, there is still plenty I have to master. Transducers? Spec? Design patterns? Performance? UX? Hit me, please. The amateur in me is eager to become a Clojure programmer worth his salt.

Join the community

Last but not least, I will set aside my never-ending fight with sociability and enjoy the Clojure community for real. I’ll be in Amsterdam from Friday afternoon to Sunday evening, so you will have enough time to join me in some healthy discussions about your favourite programming language. Or Emacs, if you fancy wild topics.

-1:-- Getting ready for Dutch Clojure Days (Post)--L0--C0--January 24, 2018 12:00 AM

Mathias Dahl: Make a copy of saved files to another directory

For various reasons I needed to sync files from one folder to another as soon as a certain file was saved in the first folder. I was wondering if Emacs could do this for me, and of course it could :)

Basically, what I am using below is Emacs' `after-save-hook' together with a list of regular expressions matching files to be "synced" and the target folder to copy the files to. Each time I save a file, the list of regexps will be checked and if there is a match, the file will also be copied to the defined target directory. Neat!

It works very well so I thought of sharing it in this way. Also, it was a long time since I wrote a blog post here... :)

Put the following code in your .emacs or init.el file and then customize after-save-file-sync-regexps.


;; The Code

(defcustom after-save-file-sync-regexps nil
  "A list of cons cells consisting of two strings. The `car' of
each cons cell is the regular expression matching the file(s)
that should be copied, and the `cdr' is the target directory."
  :group 'files
  :type '(repeat (cons string string)))

(defcustom after-save-file-sync-ask-if-overwrite nil
  "Ask the user before overwriting the destination file.
When set to a non-`nil' value, the user will be asked. When
`nil', the file will be copied without asking."
  :group 'files
  :type 'boolean)

(defun after-save-file-sync ()
  "Sync the current file if it matches one of the regexps.

This function will match each regexp in
`after-save-file-sync-regexps' against the current file name. If
there is a match, the current file will be copied to the
configured target directory.

If the file already exists in the target directory, the option
`after-save-file-sync-ask-if-overwrite' will control if the file
should be written automatically or if the user should be
presented with a question.

In theory, the same file can be copied to multiple target
directories, by configuring multiple regexps that match the same
file name."
  (dolist (file-regexp after-save-file-sync-regexps)
    (when (string-match (car file-regexp) (buffer-file-name))
      (let ((directory (file-name-as-directory (cdr file-regexp))))
        (copy-file (buffer-file-name) directory (if after-save-file-sync-ask-if-overwrite 1 t))
        (message "Copied file to %s" directory)))))

(add-hook 'after-save-hook 'after-save-file-sync)

;; The End

-1:-- Make a copy of saved files to another directory (Post Mathias Dahl)--L0--C0--January 23, 2018 08:19 PM

Phil Hagelberg: in which the cost of structured data is reduced

Last year I got the wonderful opportunity to attend RacketCon as it was hosted only 30 minutes away from my home. The two-day conference had a number of great talks on the first day, but what really impressed me was the fact that the entire second day was spent focusing on contribution. The day started out with a few 15- to 20-minute talks about how to contribute to a specific codebase (including that of Racket itself), and after that people just split off into groups focused around specific codebases. Each table had maintainers helping guide other folks towards how to work with the codebase and construct effective patch submissions.

lensmen chronicles

I came away from the conference with a great sense of appreciation for how friendly and welcoming the Racket community is, and how great Racket is as a swiss-army-knife type tool for quick tasks. (Not that it's unsuitable for large projects, but I don't have the opportunity to start any new large projects very frequently.)

The other day I wanted to generate colored maps of the world by categorizing countries interactively, and Racket seemed like it would fit the bill nicely. The job is simple: show an image of the world with one country selected; when a key is pressed, categorize that country, then show the map again with all categorized countries colored, and continue with the next country selected.

I have yet to see a language/framework more accessible and straightforward out of the box for drawing[1]. Here's the entry point which sets up state and then constructs a canvas that handles key input and display:

(define (main path)
  (let ([frame (new frame% [label "World color"])]
        [categorizations (box '())]
        [doc (call-with-input-file path read-xml/document)])
    (new (class canvas%
           (define/override (on-char event)
             (handle-key this categorizations (send event get-key-code)))
           (super-new))
         [parent frame]
         [paint-callback (draw doc categorizations)])
    (send frame show #t)))

While the class system is not one of my favorite things about Racket (most newer code seems to avoid it in favor of generic interfaces in the rare case that polymorphism is truly called for), the fact that classes can be constructed in a light-weight, anonymous way makes it much less onerous than it could be. This code sets up all mutable state in a box which you use in the way you'd use a ref in ML or Clojure: a mutable wrapper around an immutable data structure.

The world map I'm using is an SVG of the Robinson projection from Wikipedia. If you look closely there's a call to bind doc that calls call-with-input-file with read-xml/document which loads up the whole map file's SVG; just about as easily as you could ask for.

The data you get back from read-xml/document is in fact a document struct, which contains an element struct containing attribute structs and lists of more element structs. All very sensible, but maybe not what you would expect in other dynamic languages like Clojure or Lua where free-form maps reign supreme. Racket really wants structure to be known up-front when possible, which is one of the things that help it produce helpful error messages when things go wrong.

Here's how we handle keyboard input; we're displaying a map with one country highlighted, and key here tells us what the user pressed to categorize the highlighted country. If that key is in the categories hash then we put it into categorizations.

(define categories #hash((select . "eeeeff")
                         (#\1 . "993322")
                         (#\2 . "229911")
                         (#\3 . "ABCD31")
                         (#\4 . "91FF55")
                         (#\5 . "2439DF")))

(define (handle-key canvas categorizations key)
  (cond [(equal? #\backspace key) ; undo
         (swap! categorizations cdr)]
        [(member key (dict-keys categories)) ; categorize
         (swap! categorizations (curry cons key))]
        [(equal? #\space key) ; print state
         (display (unbox categorizations))])
  (send canvas refresh))

Finally once we have a list of categorizations, we need to apply it to the map document and display. We apply a fold reduction over the XML document struct and the list of country categorizations (plus 'select for the country that's selected to be categorized next) to get back a "modified" document struct where the proper elements have the style attributes applied for the given categorization, then we turn it into an image and hand it to draw-pict:

(define (update original-doc categorizations)
  (for/fold ([doc original-doc])
            ([category (cons 'select (unbox categorizations))]
             [n (in-range (length (unbox categorizations)) 0 -1)])
    (set-style doc n (style-for category))))

(define ((draw doc categorizations) _ context)
  (let* ([newdoc (update doc categorizations)]
         [xml (call-with-output-string (curry write-xml newdoc))])
    (draw-pict (call-with-input-string xml svg-port->pict) context 0 0)))

The problem is in that pesky set-style function. All it has to do is reach deep down into the document struct to find the nth path element (the one associated with a given country), and change its 'style attribute. It ought to be a simple task. Unfortunately this function ends up being anything but simple:

;; you don't need to understand this; just grasp how huge/awkward it is
(define (set-style doc n new-style)
  (let* ([root (document-element doc)]
         [g (list-ref (element-content root) 8)]
         [paths (element-content g)]
         [path (first (drop (filter element? paths) n))]
         [path-num (list-index (curry eq? path) paths)]
         [style-index (list-index (lambda (x) (eq? 'style (attribute-name x)))
                                  (element-attributes path))]
         [attr (list-ref (element-attributes path) style-index)]
         [new-attr (make-attribute (source-start attr)
                                   (source-stop attr)
                                   (attribute-name attr)
                                   new-style)]
         [new-path (make-element (source-start path)
                                 (source-stop path)
                                 (element-name path)
                                 (list-set (element-attributes path)
                                           style-index new-attr)
                                 (element-content path))]
         [new-g (make-element (source-start g)
                              (source-stop g)
                              (element-name g)
                              (element-attributes g)
                              (list-set paths path-num new-path))]
         [root-contents (list-set (element-content root) 8 new-g)])
    (make-document (document-prolog doc)
                   (make-element (source-start root)
                                 (source-stop root)
                                 (element-name root)
                                 (element-attributes root)
                                 root-contents)
                   (document-misc doc))))

The reason for this is that while structs are immutable, they don't support functional updates. Whenever you're working with immutable data structures, you want to be able to say "give me a new version of this data, but with field x replaced by the value of (f (lookup x))". Racket can do this with dictionaries but not with structs[2]. If you want a modified version you have to create a fresh one[3].

first lensman

When I brought this up in the #racket channel on Freenode, I was helpfully pointed to the 3rd-party Lens library. Lenses are a general-purpose way of composing arbitrarily nested lookups and updates. Unfortunately at this time there's a flaw preventing them from working with xml structs, so it seemed I was out of luck.

But then I was pointed to X-expressions as an alternative to structs. The xml->xexpr function turns the structs into a deeply-nested list tree with symbols and strings in it. The tag is the first item in the list, followed by an associative list of attributes, then the element's children. While this gives you fewer up-front guarantees about the structure of the data, it does work around the lens issue.

For this to work, we need to compose a new lens based on the "path" we want to use to drill down into the nth country and its style attribute. The lens-compose function lets us do that. Note that the order here might be backwards from what you'd expect; it works deepest-first (the way compose works for functions). Also note that defining one lens gives us the ability to both get nested values (with lens-view) and update them.

(define (style-lens n)
  (lens-compose (dict-ref-lens 'style)
                second-lens
                (list-ref-lens (add1 (* n 2)))
                (list-ref-lens 10)))

Our <path> XML elements are under the 10th item of the root xexpr, (hence the list-ref-lens with 10) and they are interspersed with whitespace, so we have to double n to find the <path> we want. The second-lens call gets us to that element's attribute alist, and dict-ref-lens lets us zoom in on the 'style key out of that alist.

Once we have our lens, it's just a matter of replacing set-style with a call to lens-set in our update function we had above, and then we're off:

(define (update doc categorizations)
  (for/fold ([d doc])
            ([category (cons 'select (unbox categorizations))]
             [n (in-range (length (unbox categorizations)) 0 -1)])
    (lens-set (style-lens n) d (list (style-for category)))))

second stage lensman

Often times the trade-off between freeform maps/hashes vs structured data feels like one of convenience vs long-term maintainability. While it's unfortunate that they can't be used with the xml structs[4], lenses provide a way to get the best of both worlds, at least in some situations.

The final version of the code clocks in at 51 lines and is available on GitLab.

[1] The LÖVE framework is the closest thing, but it doesn't have the same support for images as a first-class data type that works in the repl.

[2] If you're defining your own structs, you can make them implement the dictionary interface, but with the xml library we have to use the struct definitions provided us.

[3] Technically you can use the struct-copy function, but it's not that much better. The field names must be provided at compile-time, and it's no more efficient as it copies the entire contents instead of sharing internal structure. And it still doesn't have an API that allows you to express the new value as a function of the old value.

[4] Lenses work with most regular structs as long as they are transparent and don't use subtyping. Subtyping and opaque structs are generally considered bad form in modern Racket, but you do find older libraries that use them from time to time.

-1:-- in which the cost of structured data is reduced (Post Phil Hagelberg)--L0--C0--January 12, 2018 07:53 PM

Timo Geusch: Emacs within Emacs within Emacs…

A quick follow-up to my last post where I was experimenting with running emacsclient from an ansi-term running in the main Emacs. Interestingly, you can run Emacs in text mode within an ansi-term, just not emacsclient.

The post Emacs within Emacs within Emacs… appeared first on The Lone C++ Coder's Blog.

-1:-- Emacs within Emacs within Emacs… (Post Timo Geusch)--L0--C0--January 10, 2018 05:14 AM

emacspeak: Updating Voxin TTS Server To Avoid A Possible ALSA Bug

1 Summary

I recently updated to a new Linux laptop running the latest Debian
(Rodete). The upgrade went smoothly, but when I started using the
machine, I found that the Emacspeak TTS server for Voxin (Outloud)
crashed consistently; here, consistently equated to crashing on short
utterances which made typing or navigating by character an extremely
frustrating experience.

I fixed the issue by creating a work-around in the TTS server. If
you run into this issue, make sure to update and rebuild from
GitHub; alternatively, after a git update you'll find an updated
library in the servers/linux-outloud/lib/ directory that you can
copy over to your servers/linux-outloud directory.
2 What Was Crashing

I use a DMIX plugin as the default device — and have many ALSA
virtual devices that are defined in terms of this device — see my
asoundrc. With this configuration, writing to the ALSA device was
raising an EPIPE error — normally this error indicates a buffer
underrun — that's when ALSA is starved of audio data. But in many
of these cases, the ALSA device was still in a RUNNING rather than
an XRUN state — this caused the Emacspeak server to
abort. Curiously, this happened only sporadically — and from my
experimentation only happened when there were multiple streams of
audio active on the machine.
A few Google searches showed threads on the alsa/kernel devel lists
that indicated that this bug was present in the case of DMIX devices
— it was hard to tell if the patch that was submitted on the
alsa-devel list had made it into my installation of Debian.

3 Fixing The Problem

My original implementation of function xrun had been cloned from
aplay.c about 15+ years ago — looking at the newest aplay
implementation, little to nothing had changed there. I finally worked
around the issue by adding a recovery call whenever ALSA raised an
EPIPE error during write with the ALSA device state in a RUNNING
rather than an XRUN state. This appears to fix the issue.
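The recovery logic can be sketched with stubbed stand-ins for the ALSA pieces involved (the real server would use snd_pcm_writei, snd_pcm_state, and, presumably, snd_pcm_prepare; the fake_* functions and struct below are illustrative only):

```c
#include <assert.h>
#include <errno.h>

/* Toy stand-ins for the ALSA device states involved. */
typedef enum { PCM_RUNNING, PCM_XRUN } pcm_state;

typedef struct {
  pcm_state state;
  int fail_next;  /* simulate the dmix bug: EPIPE while still RUNNING */
  int prepares;   /* how many times recovery ran */
} fake_pcm;

/* Stub for snd_pcm_writei: fails once with -EPIPE, then succeeds. */
static long fake_writei(fake_pcm *pcm, long frames)
{
  if (pcm->fail_next) {
    pcm->fail_next = 0;
    return -EPIPE;  /* device still reports PCM_RUNNING */
  }
  return frames;
}

/* Stub for snd_pcm_prepare. */
static int fake_prepare(fake_pcm *pcm)
{
  pcm->prepares++;
  pcm->state = PCM_RUNNING;
  return 0;
}

/* The work-around: classic aplay-style xrun handling only expects
   EPIPE together with the XRUN state; here we re-prepare and retry
   on any EPIPE, even when the state still reads RUNNING, instead of
   aborting the server. */
static long write_with_recovery(fake_pcm *pcm, long frames)
{
  long rc = fake_writei(pcm, frames);
  if (rc == -EPIPE) {
    fake_prepare(pcm);  /* snd_pcm_prepare() in real code */
    rc = fake_writei(pcm, frames);
  }
  return rc;
}
```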

-1:-- Updating Voxin TTS Server To Avoid A Possible ALSA Bug (Post T. V. Raman)--L0--C0--January 08, 2018 06:06 PM

Rubén Berenguel: 2017: Year in Review

I’m trying to make these posts a tradition (even if a few days late). I thought 2016 had been a really weird and fun year, but 2017 has beaten it easily. And I only hope 2018 will be even better in every way. For the record, when I say we, it means Laia and me unless explicitly changed.

Beware, some of the links are affiliate links. I only recommend what I have and like though, get at your own risk :)


Everything work related has gone up. More work, better work, more interesting work. Good, isn’t it?

As far as my consulting job in London, the most relevant parts would be:

  • Lead a rewrite and refactor of the adserver (Golang) to improve speed and reliability.
  • Migrated a batch job from Apache Pig to Apache Spark to be able to cope with larger amounts of data from third parties (now we process 2x the data with 1/10th of the cost).
  • Planned an upgrade of our Kafka cluster from Kafka 0.8.2 to Kafka 0.10.1, which we could not execute as well as planned because the cluster went down. Helped save that day together with the director of engineering when that happened.
  • Was part of the hiring team, we’ve had one successful hire this year (passed probation, is an excellent team member and loves weird tech). Hopefully we enlarge our team much more in the coming year.
  • Put a real time service in Akka in production, serving and evaluating models generated by a Spark batch job.
We also moved offices, now we have a free barista “on premises”. Free, good quality coffee is the best that can be done to improve my productivity.

In April I got new business cards (designed by Laia, you can get your own design if you want, contact her):

I kept on helping a company with its SEO efforts, and as usual patience works. Search traffic has improved 30% year-to-year, so I’m pretty happy with it. Let’s see what the new year brings.

I became technical advisor of a local startup (an old friend, PhD in maths is a founder and works there as data scientist/engineer/whatever), trying to bring data insights to small and medium retailers. I help them with technology decisions where I have more hand-to-hand experience, or know where the industry is moving.


Traveling up and down as usual (2-3 weeks in London, then l’Arboç, then maybe somewhere else…) sprinkled with some conferences and holidays.

Regarding life, the universe and everything, what I’ve done and where I’ve been
  • In February we visited Hay-on-Wye again, for my birthday
  • In March I convinced Holden Karau (was easy: she loves talking about Spark :D) to be one of our great keynote speakers at PyData Barcelona 2017
  • In late March we visited Edinburgh and Glasgow
  • In early May I attended PyData London to be able to prepare better for ours. Met some great people there.
  • A bit later in May I visited Lisbon for LX Scala, thanks Jorge and the rest for the great work
  • And at the end of May, we held PyData Barcelona 2017, where I was one of the organisers. We had more than 300 attendees, enjoying a lot of interesting talks. Thanks to all attendees and the rest of the organising committee... We made a hell of a great conference
  • Mid-June, I gave my first meetup presentation, Snakes and Ladders (about typing in Python as compared with Scala) in the PyBCN meetup
  • In late June, we visited Cheddar and Wells
  • In September I visited Penrith for the awesome (thanks Jon) Scala World 2017. Looking forward to the 2019 edition.
  • In early October we visited San Sebastian for the Python San Sebastian 2017 conference. We ate terribly well there (we can recommend Bodegón Alejandro as one of the best places to eat anywhere in the world now)
  • Mid-October we visited Bletchley Park. Nice.
  • In late October we (Ernest Fontich and myself) submitted our paper Normal forms and Sternberg conjugation theorems for infinite dimensional coupled map lattice. Now we need to wait.
  • In November we visited Brussels (Ghent and Brugge too), and took an unofficial tour of the European Council with a friend who works there.
  • In December I attended for the second time Scala Exchange, and the extra community day (excellent tutorials by Heiko Seeberger and Travis Brown). Was even better than last year (maybe because I knew more people?) and I already got my tickets for next year.
  • In December we attended a wine and cheese pairing (with Francesc, our man in Brussels, and Laia) at Parés Baltà. They follow biodynamic principles (no herbicides, as natural as they can get, etc) and offer added sulfite free wines, too. They are excellent: neither Laia nor I drink, and we bought 4 bottles of their wines and cavas.
Last year I decided to start contributing to open source software this year, and I managed to become a contributor to the following projects:

I wanted to contribute to the Go compiler code base, but didn’t find an interesting issue. Maybe this year.


This year I didn’t push courses/learning as strongly as last year... Or at least this is what I thought before writing this post.

  • In August I took Apache Kafka Series - Learn Apache Kafka for Beginners, with the rest of the courses in the series waiting for me having more time available.
  • In September I tried to learn knitting and lace, but it does not seem to suit me.
  • In September I enrolled in a weekly Taichi and Qi Gong course by Mei Quan Tai Chi. Will repeat for the next term
  • In December I started learning about Cardistry


I have read slightly less than last year (36 books vs 44 last year), and the main victim has been fiction. Haven’t read much, and the best... has been the re-read of Zelazny’s Chronicles of Amber. Still excellent. I have enlarged my collection of Zelazny books, now I have more than 30.

As far as non-fiction goes, I have specially enjoyed:
  • Essentialism: given how many things I do at once, this book felt quite refreshing
  • Rich dad, poor dad: Nothing too fancy, just common sense. Invest on having assets (money-generating items) instead of liabilities (money-sucking items, like the house you live in)
  • 10% Entrepreneur: Links very well with the above. Being a 10% entrepreneur is a natural way to invest in your assets.
  • The Checklist Manifesto: Checklists are a way to automate your life. I have read several books around this concept (“creating and tweaking systems”, as a concept) and it resonates with me. If I can automate (even if I’m the machine), it’s a neat win.
  • The Subtle Art of Not Giving a F*ck: Recommended read. For no particular reason. I’ve heard that the audiobook version is great, too.


This year I have listened mostly to Sonata Arctica. We attended their concert in Glasgow (March) and it was awesome, they are really good live. This was a build up for KISS at the O2 in London (May) which was totally terrific. And followed by Bat Out of Hell (opening day!) in London. It was great, and probably the closest I’ll ever be to listening Meat Loaf live. Lately I’ve been listening to a very short playlist I have by Loquillo, and also Anachronist.

We have also attended a performance by Penn and Teller (excellent), and IIRC we have also watched just one screening: The Last Jedi (meh, but Laia liked it).


This year I have gotten hold of a lot of gadgets. I’ll mention only the terribly useful or interesting ones:
  • From last year, iPhone 7 “small”. Not happy with it: battery life sucks big time, so I got an external Mophie battery for it.
  • Mid-year: Apple Watch Series 2. Pretty cool, and more useful than I expected.
  • Late this year: AirPods. THEY ARE AWESOME
  • Laptop foldable cooling support. While taking the deep learning course my Air got very hot, and I needed some way to get it as cool as possible.
  • Nutribullet. My morning driver is banana, Kit Kat chunky, milk, golden flax seed, guarana.
  • Icebreaker merino underwear. I sweat a bit, and easily got chafed on the sides of my legs (where they contact my underwear). Not any more: not only is wool better at sweat-handling, but the fabric also feels better on the skin. And no, it does not feel hot in the summer.
  • Double Edge Shaving. I hated shaving (and actually just kept my beard trimmed so it was never a real beard or a clean shave...) and this razor (not this one specifically, safety razors are pretty much all the same) has changed that. Now I shave regularly and enjoy it a lot (together with this soap and this after shave balm)
  • Chilly bottles. They work really well to keep drinks cold or hot. I’ll be getting their food container soon.
  • Plenty of lightning cables. You can never have enough of these. I also got this great multi-device charger, ideal for traveling.
  • Compact wallet. I’ve been shown the ads so many times I finally moved from my Tyvek wallets to one from Bellroy. It is very good.
  • Book darts. Small bookmarks that don’t get lost, look great and can double as line markers. Also, they don’t add bulk to a book, so you can have many in the same book without damaging it at all. They are great, I’m getting a second tin in my next Amazon order of stuff.
  • Two frames from an artist I saw showcased in our previous office (they had exhibits downstairs). Blue Plaque Doors and Hatchard’s, by Luke Adam Hawker.
On the fun side, I also have a spiral didgeridoo, a proper set of Scottish bagpipes, a Lego Mindstorms I have not played with yet :( and an Arduboy. Oh, and a Raspberry Pi Zero Wireless.

-1:-- 2017: Year in Review (Post Rubén Berenguel ( 06, 2018 02:31 PM

Wilfred Hughes: The Emacs Guru Guide to Key Bindings

Imagine that you hold Control and type your name into Emacs. Can you describe what will happen?

– The ‘Emacs Guru Test’

Emacs shortcuts (known as ‘key bindings’) can seem ridiculous to beginners. Some Emacs users even argue you should change them as soon as you start using Emacs.

They are wrong. In this post, I’ll describe the logic behind the Emacs key bindings. Not only will you be closer to passing the guru test, but you might even find you like some of the defaults!

There Are How Many?

Emacs has a ton of key bindings.

ELISP> (length global-map)

Emacs is a modal editor, so most key bindings are mode-specific. However, my current Emacs instance has well over a hundred global shortcuts that work everywhere.

(Keymaps are nested data structures, so this actually undercounts! For example, C-h C-h and C-h f are not counted separately.)
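You can poke at this nesting yourself. A minimal sketch, evaluated in ielm or *scratch*:

```elisp
;; C-h is bound to a whole nested keymap (the help prefix),
;; yet (length global-map) counts it as a single entry.
(keymapp (lookup-key global-map (kbd "C-h"))) ; => t
```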

Even that is a drop in the bucket compared with how many commands we could define key bindings for.

ELISP> (let ((total 0))
         (mapatoms
          (lambda (sym)
            (when (commandp sym)
              (setq total (1+ total)))))
         total)

How can we possibly organise all these commands?

Mnemonic Key Bindings

Basic commands are often given key bindings based on their name. You’ll encounter all of these important commands in the Emacs tutorial.

Command Key Binding
eXecute-extended-command M-x
Next-line C-n
Previous-line C-p
Forward-char C-f
Backward-char C-b
iSearch-forward C-s

Mnemonics are a really effective way of memorising things. If you can remember the name of the command, you can probably remember the key binding too.

Organised Key Bindings

Many Emacs movement commands are laid out in a consistent pattern.

For example, movement by certain amount:

Command Key Binding
forward-char C-f
forward-word M-f
forward-sexp C-M-f

Moving to the end of something:

Command Key Binding
move-end-of-line C-e
forward-sentence M-e
end-of-defun C-M-e

Transposing, which swaps text either side of the cursor:

Command Key Binding
transpose-chars C-t
transpose-words M-t
transpose-sexps C-M-t

Killing text:

Command Key Binding
kill-line C-k
kill-sentence M-k
kill-sexp C-M-k

Have you spotted the pattern?

The pattern here is that C-whatever commands are usually small, dumb text operations. M-whatever commands are larger, and usually operate on words.

C-M-whatever commands are slightly magical. These commands understand the code they’re looking at, and operate on whole expressions. Emacs uses the term ‘sexp’ (s-expression), but these commands usually work in any programming language!
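As a quick illustration of that magic, forward-sexp (the command behind C-M-f) skips a whole balanced expression, however many words it contains. A sketch you can evaluate:

```elisp
(with-temp-buffer
  (emacs-lisp-mode)
  (insert "(foo (bar baz))")
  (goto-char (point-min))
  ;; One C-M-f worth of movement: jump over the entire expression.
  (forward-sexp)
  (point)) ; => 16, just past the closing paren
```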

Discovering Key Bindings

What happens when you press C-a? Emacs can tell you. C-h k C-a will show you exactly what command is run.

If you use a command without its key binding, Emacs will helpfully remind you there’s a shortcut available.

You can even do this backwards! If Emacs has done something neat or unexpected, you might wonder what command ran. C-h l will reveal what the command was, and exactly which keys triggered it.
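The same lookups are available programmatically, if you prefer evaluating to pressing keys. A small sketch:

```elisp
;; What does C-a run? (the elisp counterpart of C-h k C-a)
(key-binding (kbd "C-a")) ; => move-beginning-of-line

;; What keys did I just press? (the data behind C-h l)
(recent-keys)
```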

Room For Emacs

Why are Emacs key bindings different from conventional shortcuts? Why doesn’t C-c copy text to the clipboard, like many other programs?

Emacs uses mnemonics for its clipboard commands: you ‘kill’ and ‘yank’ text, so the key bindings are C-k and C-y. If you really want, you can use cua-mode so C-x, C-c and C-v act as you expect.

The problem is that Emacs commands are too versatile, too general to fit in the usual C-x, C-c, C-v. Emacs has four clipboard commands:

  1. kill: remove text and insert it into the kill-ring. This is like clipboard cut, but you can do it multiple times and Emacs will remember every item in your clipboard.
  2. kill-ring-save: copy the selected text into the kill-ring. This is like clipboard copy, but you can also do this multiple times.
  3. yank: insert text from the kill-ring. This is like clipboard paste.
  4. yank-pop: replace the previously yanked text with the next item in the kill ring. There is no equivalent in a single-item clipboard!
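A sketch of the kill-ring’s behaviour (with hypothetical text) that you can evaluate to see points 1–4 in action:

```elisp
;; Every kill is pushed onto the ring; nothing is overwritten.
(kill-new "first")
(kill-new "second")
(current-kill 0 t) ; => "second" -- what a plain yank inserts
(current-kill 1 t) ; => "first"  -- what yank-pop reaches next
```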

The generality of Emacs means that it’s hard to find a key binding for everything. Key bindings tend to be slightly longer as a result: opening a file is C-x C-f, an additional keystroke over the C-o of other programs.

Room For You

With all these key bindings already defined, what bindings should you use for your personal favourite commands?

Much like the IP address range 192.168.x.x is reserved for private use, Emacs has keys that are reserved for user configuration. All the sequences C-c LETTER, such as C-c a, are reserved for your usage, as are <F5> through <F9>.

For example, if you find yourself using imenu a lot, you might bind C-c i:

(global-set-key (kbd "C-c i") #'imenu)

You Make The Rules

This doesn’t mean that you should never modify key bindings. Emacsers create weird and wonderful ways of mapping keys all the time.

Emacs will even try to accommodate this. If you open the tutorial after changing a basic key binding, it will update accordingly!

The secret to mastering Emacs is to remember everything is self-documenting. Learn the help commands to find out which commands have default key bindings. Consider following the existing patterns when you define new key bindings or override existing ones. org-mode, for example, redefines C-M-t to transpose org elements.

Once you understand the patterns, you’ll know when to follow and when to break them. You’ll also be much closer to passing that guru test!

-1:-- The Emacs Guru Guide to Key Bindings (Post Wilfred Hughes ( 06, 2018 12:00 AM

Emacs Redux: A Crazy Productivity Boost: Remapping Return to Control (2017 Edition)

Back in 2013 I wrote about my favourite productivity boost in Emacs, namely remapping Return to Control, which in combination with the classic remapping of CapsLock to Control makes it really easy to get a grip on Emacs’s obsession with the Control key.

In the original article I suggested to OS X (now macOS) users the tool KeyRemap4MacBook, which was eventually renamed to Karabiner. Unfortunately this tool stopped working in macOS Sierra, due to some internal kernel architecture changes.

That was pretty painful for me as it meant that on my old MacBook I couldn’t upgrade to the newest macOS editions and on my new MacBook I couldn’t type properly in Emacs (as it came with Sierra pre-installed)… Bummer!

Fortunately 2 years later this is finally solved - the Karabiner team rewrote Karabiner from scratch for newer macOS releases and recently added my dream feature to the new Karabiner Elements. Unlike in the past though, this remapping is not actually bundled with Karabiner by default, so you have to download and enable it manually from here.

That’s actually even better than what I had originally suggested, as it also gives CapsLock a dual purpose - Control when held down and Escape otherwise. I have no idea how this never came to my mind, but it’s truly epic! A crazy productivity boost just got even crazier!
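For reference, a complex-modification rule of this shape looks roughly like the following in Karabiner-Elements. This is a hedged sketch of the schema from memory; the rule you download from the Karabiner site is the authoritative version:

```json
{
  "description": "CapsLock: Control when held, Escape when tapped",
  "manipulators": [
    {
      "type": "basic",
      "from": { "key_code": "caps_lock", "modifiers": { "optional": ["any"] } },
      "to": [{ "key_code": "left_control" }],
      "to_if_alone": [{ "key_code": "escape" }]
    }
  ]
}
```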


-1:-- A Crazy Productivity Boost: Remapping Return to Control (2017 Edition) (Post)--L0--C0--December 31, 2017 09:22 AM

Emacs Redux: Intro to CIDER

CIDER is a popular Clojure programming environment for Emacs.

In a nutshell - CIDER extends Emacs with support for interactive programming in Clojure. The features are centered around cider-mode, an Emacs minor-mode that complements clojure-mode. While clojure-mode supports editing Clojure source files, cider-mode adds support for interacting with a running Clojure process for compilation, debugging, definition and documentation lookup, running tests and so on.

You can safely think of CIDER as SLIME (a legendary Common Lisp programming environment) for Clojure - after all SLIME was the principal inspiration for CIDER to begin with. If you’re interested in some historical background you can check out my talk on the subject The Evolution of the Emacs tooling for Clojure.

Many people who are new to Lisps (and Emacs) really struggle with the concept of “interactive programming” and often ask what’s the easiest (and fastest) way to “grok” (understand) it.

While CIDER has an extensive manual and a section on interactive programming there, it seems for most people that’s not enough to get a clear understanding of interactive programming fundamentals and appreciate its advantages.

I always felt what CIDER needed were more video tutorials on the subject, but for one reason or another I never found the time to produce any. In the past this amazing intro to SLIME really changed my perception of SLIME and got me from 0 to 80 in like one hour. I wanted to do the same for CIDER users! And, in a way, I accidentally did this last year - at an FP conference I was attending to present CIDER, one of the speakers dropped out, and I was invited to fill in for them with a hands-on session on CIDER. It was officially named Deep Dive into CIDER, but probably “Intro to CIDER” would have been a more appropriate name, and it’s likely the best video introduction to CIDER around today. It’s certainly not my finest piece of work, and I definitely have to revisit the idea for proper high-quality tutorials in the future, but it’s better than nothing. I hope at least some of you will find it useful!

You might also find some of the additional CIDER resources mentioned in the manual helpful.


-1:-- Intro to CIDER (Post)--L0--C0--December 31, 2017 08:57 AM

(or emacs: Using digits to select company-mode candidates

I'd like to share a customization of company-mode that I've been using for a while. I refined it just recently; I'll explain how below.

Basic setting

(setq company-show-numbers t)

Now, numbers are shown next to the candidates, although they don't do anything yet:


Add some bindings

(let ((map company-active-map))
  (mapc
   (lambda (x)
     (define-key map (format "%d" x) 'ora-company-number))
   (number-sequence 0 9))
  (define-key map " " (lambda ()
                        (interactive)
                        (company-abort)
                        (self-insert-command 1)))
  (define-key map (kbd "<return>") nil))

Besides binding 0..9 to complete their corresponding candidate, it also un-binds RET and binds SPC to close the company popup.

Actual code

(defun ora-company-number ()
  "Forward to `company-complete-number'.

Unless the number is potentially part of the candidate.
In that case, insert the number."
  (interactive)
  (let* ((k (this-command-keys))
         (re (concat "^" company-prefix k)))
    (if (cl-find-if (lambda (s) (string-match re s))
                    company-candidates)
        (self-insert-command 1)
      (company-complete-number (string-to-number k)))))

Initially, I would just bind company-complete-number. The problem with that was that if my candidate list was ("var0" "var1" "var2"), then entering 1 means:

  • select the first candidate (i.e. "var0"), instead of:
  • insert "1", resulting in "var1", i.e. the second candidate.

My customization now checks company-candidates—the list of possible completions—for the above-mentioned conflict. If the conflict is detected, the key pressed is inserted instead of being used to select a candidate.


Looking at git-log, I've been using company-complete-number for at least 3 years now. It's quite useful, and now also more seamless, since I don't have to type e.g. C-q 2 any more. In any case, thanks to the author and the contributors of company-mode. Merry Christmas and happy hacking in the New Year!

-1:-- Using digits to select company-mode candidates (Post)--L0--C0--December 26, 2017 11:00 PM