Irreal: Org Mode Cookbook Revisited

Way back in 2014, I posted about Eric Neilsen’s excellent Emacs org-mode examples and cookbook. I recently came across a reference to it and was reminded what a great resource it is. It’s easy to browse through and just read one or two entries when you have time. In skimming through it, I learned—or perhaps relearned—how to insert in-line calculations in a document.
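If you haven’t seen that trick: Org’s inline source blocks let you drop a calculation into the middle of a sentence and have the result computed in place. A minimal sketch (the language and `:results` header here are just one plausible combination, not taken from the cookbook):

```org
The order totals src_emacs-lisp[:results raw]{(* 12 7)} widgets.
```

Evaluating or exporting the document replaces the inline block with the computed value.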

As I wrote in the original post, Neilsen is a researcher and his cookbook is oriented toward using Org mode to produce documents of various types. Still, that covers a lot of territory and there are many good examples of powerful Org mode use cases in it. The document has moved or, really, taken up a second residence. It was originally hosted at Fermilab, where Neilsen works, and it’s still there, but it’s also available at his own site. The two documents are identical, so it doesn’t matter whether you use the new link or the original one pointing to FNAL.

If you’re an Org user, especially if you use Org to produce documents, you should take a look at Neilsen’s cookbook and bookmark it for future use.

Update [2018-01-16 Tue 16:18]: Revistited → Revisited

-1:-- Org Mode Cookbook Revisited (Post jcs)--L0--C0--January 16, 2018 05:05 PM

Irreal: When Emacs Users Lend Their Computer

This is amusing and most developers—Emacs users or not—have probably experienced something similar. I find, though, that it’s the opposite that happens more often. I’ll be using a layperson’s computer and press Caps Lock expecting Ctrl. The expected hilarity ensues.

-1:-- When Emacs Users Lend Their Computer (Post jcs)--L0--C0--January 15, 2018 05:57 PM

sachachua: 2018-01-15 Emacs news

Links from /r/orgmode, /r/spacemacs, Hacker News, YouTube, the changes to the Emacs NEWS file, and emacs-devel.

-1:-- 2018-01-15 Emacs news (Post Sacha Chua)--L0--C0--January 15, 2018 04:51 PM

Marcin Borkowski: Counting LaTeX commands in a bunch of files

I hope that I won’t bore anyone to death with blog posts related to the journal I’m working for, but here’s another story about my experiences with that. I am currently writing a manual for authors wanting to prepare a paper for Wiadomości Matematyczne. We accept LaTeX files, of course, but we have our own LaTeX class (not yet public), and adapting what others wrote (usually using article) is sometimes a lot of work. Having the authors follow our guidelines could make that slightly less work, which is something I’d be quite happy with. (Of course, making a bunch of university mathematicians do something reasonable would be an achievement in itself.)

When I presented (the current version of) the manual to my colleagues on the editorial board, we agreed that nobody will read it anyway. And then I had an idea of preparing a TL;DR version, just a few sentences, where I could mention the one thing I want to get across: dear authors, please do not do anything fancy, just stick with plain ol’ LaTeX. And one component of that message could be a list of LaTeX commands people should stick to. (If you have never worked for a journal or somewhere where you get to look at other people’s LaTeX files, you probably have no idea what they are capable of doing.)

So here I am, having 200+ LaTeX files (there are twice as many, but I had only about 200 on my current laptop), meticulously converted to our template (which means our class and our local customs, like special commands for various dashes or avoiding colons at all costs), and I want to prepare a list of the LaTeX commands used throughout, together with information about how frequently they are used.
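The post is truncated here, but as a rough baseline outside Emacs, the counting task can be approximated with a shell pipeline that extracts every backslash-word and tallies frequencies (a sketch; the `*.tex` glob and the corpus layout are assumptions):

```shell
# Extract each \command (-o: one match per line, -h: no filenames),
# then count and sort by frequency, most common first.
grep -ohE '\\[a-zA-Z]+' *.tex | sort | uniq -c | sort -rn
```

This misses commands containing `@` or digits, but gives a quick first look at what authors actually use.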
-1:-- Counting LaTeX commands in a bunch of files (Post)--L0--C0--January 15, 2018 05:08 AM

Phil Hagelberg: in which the cost of structured data is reduced

Last year I got the wonderful opportunity to attend RacketCon as it was hosted only 30 minutes away from my home. The two-day conference had a number of great talks on the first day, but what really impressed me was the fact that the entire second day was spent focusing on contribution. The day started out with a few 15- to 20-minute talks about how to contribute to a specific codebase (including that of Racket itself), and after that people just split off into groups focused around specific codebases. Each table had maintainers helping guide other folks towards how to work with the codebase and construct effective patch submissions.

lensmen chronicles

I came away from the conference with a great sense of appreciation for how friendly and welcoming the Racket community is, and how great Racket is as a swiss-army-knife type tool for quick tasks. (Not that it's unsuitable for large projects, but I don't have the opportunity to start any new large projects very frequently.)

The other day I wanted to generate colored maps of the world by categorizing countries interactively, and Racket seemed like it would fit the bill nicely. The job is simple: show an image of the world with one country selected; when a key is pressed, categorize that country, then show the map again with all categorized countries colored, and continue with the next country selected.

I have yet to see a language/framework more accessible and straightforward out of the box for drawing[1]. Here's the entry point which sets up state and then constructs a canvas that handles key input and display:

(define (main path)
  (let ([frame (new frame% [label "World color"])]
        [categorizations (box '())]
        [doc (call-with-input-file path read-xml/document)])
    (new (class canvas%
           (define/override (on-char event)
             (handle-key this categorizations (send event get-key-code))))
         [parent frame]
         [paint-callback (draw doc categorizations)])
    (send frame show #t)))

While the class system is not one of my favorite things about Racket (most newer code seems to avoid it in favor of generic interfaces in the rare case that polymorphism is truly called for), the fact that classes can be constructed in a light-weight, anonymous way makes it much less onerous than it could be. This code sets up all mutable state in a box which you use in the way you'd use a ref in ML or Clojure: a mutable wrapper around an immutable data structure.
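If boxes are new to you, here is a minimal sketch of the read-modify-write pattern described above (illustrative only, not from the post's source):

```racket
;; A box is a single mutable cell holding an immutable value.
(define state (box '()))
;; "Update" means: read the current value, build a new one, write it back.
(set-box! state (cons 'a (unbox state)))
(unbox state) ; => '(a)
```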

The world map I'm using is an SVG of the Robinson projection from Wikipedia. If you look closely there's a call to bind doc that calls call-with-input-file with read-xml/document which loads up the whole map file's SVG; just about as easily as you could ask for.

The data you get back from read-xml/document is in fact a document struct, which contains an element struct containing attribute structs and lists of more element structs. All very sensible, but maybe not what you would expect in other dynamic languages like Clojure or Lua where free-form maps reign supreme. Racket really wants structure to be known up-front when possible, which is one of the things that help it produce helpful error messages when things go wrong.

Here's how we handle keyboard input; we're displaying a map with one country highlighted, and key here tells us what the user pressed to categorize the highlighted country. If that key is in the categories hash then we put it into categorizations.

(define categories #hash((select . "eeeeff")
                         (#\1 . "993322")
                         (#\2 . "229911")
                         (#\3 . "ABCD31")
                         (#\4 . "91FF55")
                         (#\5 . "2439DF")))

(define (handle-key canvas categorizations key)
  (cond [(equal? #\backspace key) ; undo
         (swap! categorizations cdr)]
        [(member key (dict-keys categories)) ; categorize
         (swap! categorizations (curry cons key))]
        [(equal? #\space key) ; print state
         (display (unbox categorizations))])
  (send canvas refresh))
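Note that handle-key relies on a swap! helper that this excerpt doesn't define; the conventional definition (apply a function to the boxed value and store the result) would be something like:

```racket
(define (swap! b f)
  (set-box! b (f (unbox b))))
```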

Finally once we have a list of categorizations, we need to apply it to the map document and display. We apply a fold reduction over the XML document struct and the list of country categorizations (plus 'select for the country that's selected to be categorized next) to get back a "modified" document struct where the proper elements have the style attributes applied for the given categorization, then we turn it into an image and hand it to draw-pict:

(define (update original-doc categorizations)
  (for/fold ([doc original-doc])
            ([category (cons 'select (unbox categorizations))]
             [n (in-range (length (unbox categorizations)) 0 -1)])
    (set-style doc n (style-for category))))

(define ((draw doc categorizations) _ context)
  (let* ([newdoc (update doc categorizations)]
         [xml (call-with-output-string (curry write-xml newdoc))])
    (draw-pict (call-with-input-string xml svg-port->pict) context 0 0)))

The problem is in that pesky set-style function. All it has to do is reach deep down into the document struct to find the nth path element (the one associated with a given country), and change its 'style attribute. It ought to be a simple task. Unfortunately this function ends up being anything but simple:

;; you don't need to understand this; just grasp how huge/awkward it is
(define (set-style doc n new-style)
  (let* ([root (document-element doc)]
         [g (list-ref (element-content root) 8)]
         [paths (element-content g)]
         [path (first (drop (filter element? paths) n))]
         [path-num (list-index (curry eq? path) paths)]
         [style-index (list-index (lambda (x) (eq? 'style (attribute-name x)))
                                  (element-attributes path))]
         [attr (list-ref (element-attributes path) style-index)]
         [new-attr (make-attribute (source-start attr)
                                   (source-stop attr)
                                   (attribute-name attr)
                                   new-style)]
         [new-path (make-element (source-start path)
                                 (source-stop path)
                                 (element-name path)
                                 (list-set (element-attributes path)
                                           style-index new-attr)
                                 (element-content path))]
         [new-g (make-element (source-start g)
                              (source-stop g)
                              (element-name g)
                              (element-attributes g)
                              (list-set paths path-num new-path))]
         [root-contents (list-set (element-content root) 8 new-g)])
    (make-document (document-prolog doc)
                   (make-element (source-start root)
                                 (source-stop root)
                                 (element-name root)
                                 (element-attributes root)
                                 root-contents)
                   (document-misc doc))))

The reason for this is that while structs are immutable, they don't support functional updates. Whenever you're working with immutable data structures, you want to be able to say "give me a new version of this data, but with field x replaced by the value of (f (lookup x))". Racket can do this with dictionaries but not with structs[2]. If you want a modified version you have to create a fresh one[3].

first lensman

When I brought this up in the #racket channel on Freenode, I was helpfully pointed to the 3rd-party Lens library. Lenses are a general-purpose way of composing arbitrarily nested lookups and updates. Unfortunately at this time there's a flaw preventing them from working with xml structs, so it seemed I was out of luck.

But then I was pointed to X-expressions as an alternative to structs. The xml->xexpr function turns the structs into a deeply-nested list tree with symbols and strings in it. The tag is the first item in the list, followed by an associative list of attributes, then the element's children. While this gives you fewer up-front guarantees about the structure of the data, it does work around the lens issue.
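To make the xexpr shape concrete, here is a sketch of how a hypothetical SVG fragment (not the actual map data) comes out of xml->xexpr:

```racket
;; <g id="countries"> <path style="fill:#eee" d="M0 0"/> </g>
;; becomes roughly:
'(g ((id "countries"))
    " "                                     ; whitespace survives as strings
    (path ((style "fill:#eee") (d "M0 0")))
    " ")
```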

For this to work, we need to compose a new lens based on the "path" we want to use to drill down into the nth country and its style attribute. The lens-compose function lets us do that. Note that the order here might be backwards from what you'd expect; it works deepest-first (the way compose works for functions). Also note that defining one lens gives us the ability to both get nested values (with lens-view) and update them.

(define (style-lens n)
  (lens-compose (dict-ref-lens 'style)
                (list-ref-lens (add1 (* n 2)))
                (list-ref-lens 10)))

Our <path> XML elements are under the 10th item of the root xexpr (hence the list-ref-lens with 10), and they are interspersed with whitespace, so we have to double n to find the <path> we want. The second list-ref-lens call gets us to that element's attribute alist, and dict-ref-lens lets us zoom in on the 'style key of that alist.

Once we have our lens, it's just a matter of replacing set-style with a call to lens-set in our update function we had above, and then we're off:

(define (update doc categorizations)
  (for/fold ([d doc])
            ([category (cons 'select (unbox categorizations))]
             [n (in-range (length (unbox categorizations)) 0 -1)])
    (lens-set (style-lens n) d (list (style-for category)))))

second stage lensman

Oftentimes the trade-off between freeform maps/hashes vs structured data feels like one of convenience vs long-term maintainability. While it's unfortunate that they can't be used with the xml structs[4], lenses provide a way to get the best of both worlds, at least in some situations.

The final version of the code clocks in at 51 lines and is available on GitLab.

[1] The LÖVE framework is the closest thing, but it doesn't have the same support for images as a first-class data type that works in the repl.

[2] If you're defining your own structs, you can make them implement the dictionary interface, but with the xml library we have to use the struct definitions provided us.

[3] Technically you can use the struct-copy function, but it's not that much better. The field names must be provided at compile-time, and it's no more efficient as it copies the entire contents instead of sharing internal structure. And it still doesn't have an API that allows you to express the new value as a function of the old value.

[4] Lenses work with most regular structs as long as they are transparent and don't use subtyping. Subtyping and opaque structs are generally considered bad form in modern Racket, but you do find older libraries that use them from time to time.

-1:-- in which the cost of structured data is reduced (Post Phil Hagelberg)--L0--C0--January 12, 2018 07:53 PM

Timo Geusch: Emacs within Emacs within Emacs…

A quick follow-up to my last post, where I was experimenting with running emacsclient from an ansi-term running in the main Emacs. Interestingly, you can run Emacs in text mode within an ansi-term, just not emacsclient.

The post Emacs within Emacs within Emacs… appeared first on The Lone C++ Coder's Blog.

-1:-- Emacs within Emacs within Emacs… (Post Timo Geusch)--L0--C0--January 10, 2018 05:14 AM

sachachua: 2018-01-09 Emacs news

Links from /r/orgmode, /r/spacemacs, Hacker News, YouTube, the changes to the Emacs NEWS file, and emacs-devel.

-1:-- 2018-01-09 Emacs news (Post Sacha Chua)--L0--C0--January 09, 2018 03:43 PM

emacspeak: Updating Voxin TTS Server To Avoid A Possible ALSA Bug

1 Summary

I recently updated to a new Linux laptop running the latest Debian
(Rodete). The upgrade went smoothly, but when I started using the
machine, I found that the Emacspeak TTS server for Voxin (Outloud)
crashed consistently; here, consistently equated to crashing on short
utterances which made typing or navigating by character an extremely
frustrating experience.

I fixed the issue by creating a work-around in the TTS server —
if you run into this issue, make sure to update and rebuild from
GitHub; alternatively, you'll find an updated library in the
servers/linux-outloud/lib/ directory after a git update that you can
copy over to your servers/linux-outloud directory.

2 What Was Crashing

I use a DMIX plugin as the default device — and have many ALSA
virtual devices that are defined in terms of this device — see my
asoundrc. With this configuration, writing to the ALSA device was
raising an EPIPE error — normally this error indicates a buffer
underrun — that's when ALSA is starved of audio data. But in many
of these cases, the ALSA device was still in a RUNNING rather than
an XRUN state — this caused the Emacspeak server to
abort. Curiously, this happened only sporadically — and from my
experimentation only happened when there were multiple streams of
audio active on the machine.
A few Google searches showed threads on the alsa/kernel devel lists
that indicated that this bug was present in the case of DMIX devices
— it was hard to tell if the patch that was submitted on the
alsa-devel list had made it into my installation of Debian.

3 Fixing The Problem

My original implementation of function xrun had been cloned from
aplay.c about 15+ years ago — looking at the newest aplay
implementation, little to nothing had changed there. I finally worked
around the issue by adding a call to


whenever ALSA raised an EPIPE error during write — with the ALSA
device state in a RUNNING rather than an XRUN state. This
appears to fix the issue.

-1:-- Updating Voxin TTS Server To Avoid A Possible ALSA Bug (Post T. V. Raman)--L0--C0--January 08, 2018 06:06 PM

Marcin Borkowski: A small editing tool for work with AMSrefs

As I mentioned many times, I often edit LaTeX files written by someone else for a journal. One thing which is notoriously difficult to get right when writing academic papers is bibliographies. At Wiadomości Matematyczne, we use AMSrefs, which is really nice (even if it has some rough edges here and there). (BTW, BibLaTeX was not as mature as it is today when we settled on our tool; also, AMSrefs might be a tad easier to customize, though I’m not sure about that anymore…) One of the commands AMSrefs offers is \citelist. Instead of writing things like papers \cite{1}, \cite{2} and~\cite{3}, you write papers \citelist{\cite{1}\cite{2}\cite{3}}, and AMSrefs sorts these entries and compresses runs into ranges (like in [1-3]). The only problem is that most authors have no idea that this exists, and we often have to convert “manual” lists of citations into \citelist's. Well, as usual, Emacs to the rescue.
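To make the request to authors concrete, the before/after looks roughly like this (the citation keys are placeholders):

```latex
% Hand-built list, what authors tend to write:
as shown in papers \cite{abc01}, \cite{def02} and~\cite{ghi03}

% What AMSrefs offers instead:
as shown in papers \citelist{\cite{abc01}\cite{def02}\cite{ghi03}}
% AMSrefs sorts the entries and compresses runs into ranges, e.g. [1-3].
```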
-1:-- A small editing tool for work with AMSrefs (Post)--L0--C0--January 07, 2018 08:22 PM

Rubén Berenguel: 2017: Year in Review

I’m trying to make these posts a tradition (even if a few days late). I thought 2016 had been a really weird and fun year, but 2017 has beaten it easily. And I only hope 2018 will be even better in every way. For the record, when I say we, it means Laia and me unless explicitly changed.

Beware, some of the links are affiliate links. I only recommend what I have and like though, get at your own risk :)


Everything work related has gone up. More work, better work, more interesting work. Good, isn’t it?

As far as my consulting job in London, the most relevant parts would be:

  • Led a rewrite and refactor of the adserver (Golang) to improve speed and reliability.
  • Migrated a batch job from Apache Pig to Apache Spark to be able to cope with larger amounts of data from third parties (now we process 2x the data with 1/10th of the cost).
  • Planned an upgrade of our Kafka cluster from Kafka 0.8.2 to Kafka 0.10.1, which we could not execute as well as planned because the cluster went down; I helped save the day together with the director of engineering when that happened.
  • Was part of the hiring team; we’ve had one successful hire this year (passed probation, is an excellent team member and loves weird tech). Hopefully we’ll enlarge our team much more in the coming year.
  • Put a real time service in Akka in production, serving and evaluating models generated by a Spark batch job.
We also moved offices, now we have a free barista “on premises”. Free, good quality coffee is the best that can be done to improve my productivity.

In April I got new business cards (designed by Laia, you can get your own design if you want, contact her):

I kept on helping a company with its SEO efforts, and as usual patience works. Search traffic has improved 30% year-to-year, so I’m pretty happy with it. Let’s see what the new year brings.

I became technical advisor of a local startup (an old friend, PhD in maths, is a founder and works there as data scientist/engineer/whatever), trying to bring data insights to small and medium retailers. I help them with technology decisions where I have more hands-on experience, or know where the industry is moving.


Traveling up and down as usual (2-3 weeks in London, then l’Arboç, then maybe somewhere else…) sprinkled with some conferences and holidays.

Regarding life, the universe and everything, what I’ve done and where I’ve been
  • In February we visited Hay-on-Wye again, for my birthday
  • In March I convinced Holden Karau (was easy: she loves talking about Spark :D) to be one of our great keynote speakers at PyData Barcelona 2017
  • In late March we visited Edinburgh and Glasgow
  • In early May I attended PyData London to be able to prepare better for ours. Met some great people there.
  • A bit later in May I visited Lisbon for LX Scala, thanks Jorge and the rest for the great work
  • And at the end of May, we held PyData Barcelona 2017, where I was one of the organisers. We had more than 300 attendees, enjoying a lot of interesting talks. Thanks to all attendees and the rest of the organising committee... We made a hell of a great conference
  • Mid-June, I gave my first meetup presentation, Snakes and Ladders (about typing in Python as compared with Scala) in the PyBCN meetup
  • In late June, we visited Cheddar and Wells
  • In September I visited Penrith for the awesome (thanks Jon) Scala World 2017. Looking forward to the 2019 edition.
  • In early October we visited San Sebastian for the Python San Sebastian 2017 conference. We ate terribly well there (we can recommend Bodegón Alejandro as one of the best places to eat anywhere in the world now)
  • Mid-October we visited Bletchley Park. Nice.
  • In late October we (Ernest Fontich and myself) submitted our paper Normal forms and Sternberg conjugation theorems for infinite dimensional coupled map lattice. Now we need to wait.
  • In November we visited Brussels (Ghent and Brugge too), and took an unofficial tour of the European Council with a friend who works there.
  • In December I attended for the second time Scala Exchange, and the extra community day (excellent tutorials by Heiko Seeberger and Travis Brown). Was even better than last year (maybe because I knew more people?) and I already got my tickets for next year.
  • In December we attended a wine and cheese pairing (with Francesc, our man in Brussels, and Laia) at Parés Baltà. They follow biodynamic principles (no herbicides, as natural as they can get, etc) and offer added sulfite free wines, too. They are excellent: neither Laia nor I drink, and we bought 4 bottles of their wines and cavas.
Last year I decided to start contributing to open source software this year, and I managed to become a contributor to the following projects:

I wanted to contribute to the Go compiler code base, but didn’t find an interesting issue. Maybe this year.


This year I didn’t push courses/learning as strongly as last year... Or at least this is what I thought before writing this post.

  • In August I took Apache Kafka Series - Learn Apache Kafka for Beginners; the rest of the courses in the series are waiting until I have more time available.
  • In September I tried to learn knitting and lace, but it does not seem to suit me.
  • In September I enrolled in a weekly Taichi and Qi Gong course by Mei Quan Tai Chi. Will repeat for the next term.
  • In December I started learning about Cardistry.


I have read slightly less than last year (36 books vs 44 last year), and the main victim has been fiction. Haven’t read much, and the best... has been the re-read of Zelazny’s Chronicles of Amber. Still excellent. I have enlarged my collection of Zelazny books, now I have more than 30.

As far as non-fiction goes, I have specially enjoyed:
  • Essentialism: given how many things I do at once, this book felt quite refreshing
  • Rich dad, poor dad: Nothing too fancy, just common sense. Invest in having assets (money-generating items) instead of liabilities (money-sucking items, like the house you live in)
  • 10% Entrepreneur: Links very well with the above. Being a 10% entrepreneur is a natural way to invest in your assets.
  • The Checklist Manifesto: Checklists are a way to automate your life. I have read several books around this concept (“creating and tweaking systems”, as a concept) and it resonates with me. If I can automate (even if I’m the machine), it’s a neat win.
  • The Subtle Art of Not Giving a F*ck: Recommended read. For no particular reason. I’ve heard that the audiobook version is great, too.


This year I have listened mostly to Sonata Arctica. We attended their concert in Glasgow (March) and it was awesome; they are really good live. This was a build-up for KISS at the O2 in London (May), which was totally terrific. And followed by Bat Out of Hell (opening day!) in London. It was great, and probably the closest I’ll ever get to hearing Meat Loaf live. Lately I’ve been listening to a very short playlist I have by Loquillo, and also Anachronist.

We have also attended a performance by Penn and Teller (excellent), and IIRC we have also watched just one screening: The Last Jedi (meh, but Laia liked it).


This year I have gotten hold of a lot of gadgets. I mention only the terribly useful or interesting:
  • From last year, iPhone 7 “small”. Not happy with it. Battery life sucks big time, so I got an external Mophie battery for it.
  • Mid-year: Apple Watch Series 2. Pretty cool, and more useful than I expected.
  • Late this year: AirPods. THEY ARE AWESOME
  • Laptop foldable cooling support. While taking the deep learning course my Air got very hot, and I needed some way to get it as cool as possible.
  • Nutribullet. My morning driver is banana, Kit Kat chunky, milk, golden flax seed, guarana.
  • Icebreaker merino underwear. I sweat a bit, and get easily chafed on the side of my legs (where it contacts my underwear). Not any more: not only is wool better at sweat-handling, but the fabric also feels better on the skin. And no, it does not feel hot in the summer.
  • Double Edge Shaving. I hated shaving (and actually just kept my beard trimmed so it was never a real beard or a clean shave...) and this razor (not this one specifically, safety razors are pretty much all the same) has changed that. Now I shave regularly and enjoy it a lot (together with this soap and this after shave balm)
  • Chilly bottles. They work really well to keep drinks cold or hot. I’ll be getting their food container soon.
  • Plenty of lightning cables. You can never have enough of these. I also got this great multi-device charger, ideal for traveling.
  • Compact wallet. I’ve been shown the ads so many times I finally moved from my Tyvek wallets to one from Bellroy. It is very good.
  • Book darts. Small bookmarks that don’t get lost, look great and can double as line markers. Also, they don’t add bulk to a book, so you can have many in the same book without damaging it at all. They are great, I’m getting a second tin in my next Amazon order of stuff.
  • Two frames from an artist I saw showcased in our previous office (they had exhibits downstairs). Blue Plaque Doors and Hatchard’s, by Luke Adam Hawker.
On the fun side, I also have a spiral didgeridoo, a proper set of Scottish bagpipes, a Lego Mindstorms I have not played with yet :( and an Arduboy. Oh, and a Raspberry Pi Zero Wireless.

-1:-- 2017: Year in Review (Post Rubén Berenguel)--L0--C0--January 06, 2018 02:31 PM

Alex Schroeder: Gopher Mode

Yeah, I’ve been working on Gopher stuff over the holidays.

  1. a Gopher server wrapper around Oddmuse wiki (and this site is running it, see gopher://
  2. a proposal of a new item type to write to a Gopher server with examples based on netcat, i.e. nc
  3. improvements to the Emacs Gopher client with support for HTML and the new item type (see this branch on GitHub)

Isn’t that amazing?


-1:-- Gopher Mode (Post)--L0--C0--January 03, 2018 08:05 AM

Emacs Redux: A Crazy Productivity Boost: Remapping Return to Control (2017 Edition)

Back in 2013 I wrote about my favourite productivity boost in Emacs, namely remapping Return to Control, which in combination with the classic remapping of CapsLock to Control makes it really easy to get a grip on Emacs’s obsession with the Control key.

In the original article I suggested to OS X (now macOS) users the tool KeyRemap4MacBook, which was eventually renamed to Karabiner. Unfortunately this tool stopped working in macOS Sierra, due to some internal kernel architecture changes.

That was pretty painful for me as it meant that on my old MacBook I couldn’t upgrade to the newest macOS editions and on my new MacBook I couldn’t type properly in Emacs (as it came with Sierra pre-installed)… Bummer!

Fortunately 2 years later this is finally solved - the Karabiner team rewrote Karabiner from scratch for newer macOS releases and recently added my dream feature to the new Karabiner Elements. Unlike in the past though, this remapping is not actually bundled with Karabiner by default, so you have to download and enable it manually from here.

That’s actually even better than what I had originally suggested, as here it’s also suggested to use CapsLock with a dual purpose as well - Control when held down and Escape otherwise. I have no idea how this never came to my mind, but it’s truly epic! A crazy productivity boost just got even crazier!
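For the curious, a Karabiner-Elements complex-modification rule for the CapsLock dual role looks roughly like the following sketch (the downloadable rule may differ in its details):

```json
{
  "description": "CapsLock to Control when held, Escape when tapped",
  "manipulators": [
    {
      "type": "basic",
      "from": { "key_code": "caps_lock", "modifiers": { "optional": ["any"] } },
      "to": [ { "key_code": "left_control" } ],
      "to_if_alone": [ { "key_code": "escape" } ]
    }
  ]
}
```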


-1:-- A Crazy Productivity Boost: Remapping Return to Control (2017 Edition) (Post)--L0--C0--December 31, 2017 09:22 AM

Emacs Redux: Intro to CIDER

CIDER is a popular Clojure programming environment for Emacs.

In a nutshell - CIDER extends Emacs with support for interactive programming in Clojure. The features are centered around cider-mode, an Emacs minor-mode that complements clojure-mode. While clojure-mode supports editing Clojure source files, cider-mode adds support for interacting with a running Clojure process for compilation, debugging, definition and documentation lookup, running tests and so on.

You can safely think of CIDER as SLIME (a legendary Common Lisp programming environment) for Clojure - after all SLIME was the principal inspiration for CIDER to begin with. If you’re interested in some historical background you can check out my talk on the subject The Evolution of the Emacs tooling for Clojure.

Many people who are new to Lisps (and Emacs) really struggle with the concept of “interactive programming” and are often asking what’s the easiest (and fastest) way to “grok” (understand) it.

While CIDER has an extensive manual and a section on interactive programming there, it seems for most people that’s not enough to get a clear understanding of interactive programming fundamentals and appreciate its advantages.

I always felt what CIDER needed were more video tutorials on the subject, but for one reason or another I never found the time to produce any. In the past this amazing intro to SLIME really changed my perception of SLIME and got me from 0 to 80 in like one hour. I wanted to do the same for CIDER users! And I accidentally did this in a way last year - at an FP conference I was attending to present CIDER, one of the speakers dropped out, and I was invited to fill in for them with a hands-on session on CIDER. It was officially named Deep Dive into CIDER, but probably “Intro to CIDER” would have been a more appropriate name, and it’s likely the best video introduction to CIDER around today. It’s certainly not my finest piece of work, and I definitely have to revisit the idea for proper high-quality tutorials in the future, but it’s better than nothing. I hope at least some of you will find it useful!

You might also find some of the additional CIDER resources mentioned in the manual helpful.


-1:-- Into to CIDER (Post)--L0--C0--December 31, 2017 08:57 AM

Alex Schroeder: Fonts

What fonts should I use on my new laptop?

On the Apple PowerBook Pro I used Fira Code. I liked those ligatures for JavaScript!

But now I’m thinking perhaps Noto is better? Especially since there are packages for it: sudo apt install fonts-noto fonts-noto-color-emoji and you’re good to go. Except that Emacs doesn’t show me any orange flames when I use 🔥. Sad!

Still, in my Emacs config now: (set-face-attribute 'default nil :family "Noto Mono" :height 140).

Firefox (PureBrowser) renders the flame in blue with a line height of an estimated 600%. This looks very ugly.

Installing fonts-symbola gives me the black flames back. Not cool, but also not bad.


-1:-- Fonts (Post)--L0--C0--December 27, 2017 07:43 PM

(or emacs: Using digits to select company-mode candidates

I'd like to share a customization of company-mode that I've been using for a while. I refined it just recently; I'll explain how below.

Basic setting

(setq company-show-numbers t)

Now, numbers are shown next to the candidates, although they don't do anything yet:


Add some bindings

(let ((map company-active-map))
  (mapc (lambda (x)
          (define-key map (format "%d" x) 'ora-company-number))
        (number-sequence 0 9))
  (define-key map " " (lambda ()
                        (interactive)
                        (company-abort)
                        (self-insert-command 1)))
  (define-key map (kbd "<return>") nil))

Besides binding 0..9 to complete their corresponding candidate, it also un-binds RET and binds SPC to close the company popup.

Actual code

(defun ora-company-number ()
  "Forward to `company-complete-number'.

Unless the number is potentially part of the candidate.
In that case, insert the number."
  (interactive)
  (let* ((k (this-command-keys))
         (re (concat "^" company-prefix k)))
    (if (cl-find-if (lambda (s) (string-match re s))
                    company-candidates)
        (self-insert-command 1)
      (company-complete-number (string-to-number k)))))

Initially, I would just bind company-complete-number. The problem with that was that if my candidate list was ("var0" "var1" "var2"), then entering 1 means:

  • select the first candidate (i.e. "var0"), instead of:
  • insert "1", resulting in "var1", i.e. the second candidate.

My customization will now check company-candidates—the list of possible completions—for the above-mentioned conflict. If it's detected, the key pressed will be inserted instead of being used to select a candidate.


Looking at git-log, I've been using company-complete-number for at least 3 years now. It's quite useful, and now also more seamless, since I don't have to type e.g. C-q 2 any more. In any case, thanks to the author and the contributors of company-mode. Merry Christmas and happy hacking in the New Year!

-1:-- Using digits to select company-mode candidates (Post)--L0--C0--December 26, 2017 11:00 PM

Manuel Uberti: A year of functional programming

First things first: the title is a lie.

If you happen to be one of my passionate readers, you may recall I started working with Clojure on April 1. So yes, not every month of the year has been devoted to functional programming. I just needed something bold to pull you in, sorry.

Now, how does it feel having worked with Clojure for almost a year?

Here at 7bridges we had our fair share of projects. The open source ones are just a selected few: clj-odbp, a driver for OrientDB binary protocol; carter, an SPA to show how our driver works; remys, a little tool to interact with MySQL databases via REST APIs. I also had the chance to play with ArangoDB recently, and there were no problems building a sample project to understand its APIs.

At home, boodle was born to strengthen my ever-growing knowledge and do something useful for the family.

When I started in the new office, the switch from professional Java to professional Clojure was a bit overwhelming. New libraries, new tools, new patterns, new ways of solving the same old problems, new problems to approach with a totally different mindset. It all seemed too much.

Then, something clicked.

Having the same language on both client- and server-side helped me figure out the matters at hand with a set of ideas I could easily reuse. Once I understood the problem, I could look for the steps to solve it. Each step required a data structure and the function to handle that data structure. The first time I used reduce-kv because it was the most natural choice, it left a great smile on my face.

There is still much to learn, though. Due to my lack of experience with JavaScript, my ClojureScript-fu needs to improve. I have come to appreciate unit testing, but it’s time to put this love to work on my .cljs files too. I also definitely want to know more about the security and performance of Clojure web applications.

2017 has been a great year to be a functional programmer. My recent liaison with Haskell is directing me more and more on my way. The functional programming way.

-1:-- A year of functional programming (Post)--L0--C0--December 21, 2017 12:00 AM

Chris Wellons: What's in an Emacs Lambda

There was recently some interesting discussion about correctly using backquotes to express a mixture of data and code. Since lambda expressions seem to evaluate to themselves, what’s the difference? For example, an association list of operations:

'((add . (lambda (a b) (+ a b)))
  (sub . (lambda (a b) (- a b)))
  (mul . (lambda (a b) (* a b)))
  (div . (lambda (a b) (/ a b))))

It looks like it would work, and indeed it does work in this case. However, there are good reasons to actually evaluate those lambda expressions. Eventually invoking the lambda expressions in the quoted form above is equivalent to using eval. So, instead, prefer the backquote form:

`((add . ,(lambda (a b) (+ a b)))
  (sub . ,(lambda (a b) (- a b)))
  (mul . ,(lambda (a b) (* a b)))
  (div . ,(lambda (a b) (/ a b))))

There are a lot of interesting things to say about this, but let’s first reduce it to two very simple cases:

(lambda (x) x)

'(lambda (x) x)

What’s the difference between these two forms? The first is a lambda expression, and it evaluates to a function object. The other is a quoted list that looks like a lambda expression, and it evaluates to a list — a piece of data.

A naive evaluation of these expressions in *scratch* (C-x C-e) suggests they are identical, and so it would seem that quoting a lambda expression doesn’t really matter:

(lambda (x) x)
;; => (lambda (x) x)

'(lambda (x) x)
;; => (lambda (x) x)

However, there are two common situations where this is not the case: byte compilation and lexical scope.

Lambda under byte compilation

It’s a little trickier to evaluate these forms byte compiled in the scratch buffer since that doesn’t happen automatically. But if it did, it would look like this:

;;; -*- lexical-binding: nil; -*-

(lambda (x) x)
;; => #[(x) "\010\207" [x] 1]

'(lambda (x) x)
;; => (lambda (x) x)

The #[...] is the syntax for a byte-code function object. As discussed in detail in my byte-code internals article, it’s a special vector object that contains byte-code, and other metadata, for evaluation by Emacs’ virtual stack machine. Elisp is one of very few languages with readable function objects, and this feature is core to its ahead-of-time byte compilation.

The quote, by definition, prevents evaluation, and so inhibits byte compilation of the lambda expression. It’s vital that the byte compiler does not try to guess the programmer’s intent and compile the expression anyway, since that would interfere with lists that just so happen to look like lambda expressions — i.e. any list containing the lambda symbol.

There are three reasons you want your lambda expressions to get byte compiled:

  • Byte-compiled functions are significantly faster. That’s the main purpose for byte compilation after all.

  • The compiler performs static checks, producing warnings and errors ahead of time. This lets you spot certain classes of problems before they occur. The static analysis is even better under lexical scope due to its tighter semantics.

  • Under lexical scope, byte-compiled closures may use less memory. More specifically, they won’t accidentally keep objects alive longer than necessary. I’ve never seen a name for this implementation issue, but I call it overcapturing. More on this later.

While it’s common for personal configurations to skip byte compilation, Elisp should still generally be written as if it were going to be byte compiled. General rule of thumb: Ensure your lambda expressions are actually evaluated.

Lambda in lexical scope

As I’ve stressed many times, you should always use lexical scope. There’s no practical disadvantage or trade-off involved. Just do it.

Once lexical scope is enabled, the two expressions diverge even without byte compilation:

;;; -*- lexical-binding: t; -*-

(lambda (x) x)
;; => (closure (t) (x) x)

'(lambda (x) x)
;; => (lambda (x) x)

Under lexical scope, lambda expressions evaluate to closures. Closures capture their lexical environment in their closure object — nothing in this particular case. It’s a type of function object, making it a valid first argument to funcall.

Since the quote prevents the second expression from being evaluated, semantically it evaluates to a list that just so happens to look like a (non-closure) function object. Invoking a data object as a function is like using eval — i.e. executing data as code. Everyone already knows eval should not be used lightly.
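To see the difference in behavior concretely, here is a small demonstration under lexical binding (the variable names are illustrative, not from the article):

```elisp
;;; -*- lexical-binding: t; -*-

;; A real closure: `n' is captured from the lexical environment.
(let ((n 10))
  (funcall (lambda (x) (+ x n)) 5))
;; => 15

;; A quoted "lambda" is just data. Invoking it falls back to
;; eval-like interpretation, where the lexical `n' is invisible,
;; so this signals (void-variable n).
(let ((n 10))
  (funcall '(lambda (x) (+ x n)) 5))
```

Under dynamic scope the second form would happen to work, which is exactly why this kind of bug so often goes unnoticed.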

It’s a little more interesting to look at a closure that actually captures a variable, so here’s a definition for constantly, a higher-order function that returns a closure that accepts any number of arguments and returns a particular constant:

(defun constantly (x)
  (lambda (&rest _) x))

Without byte compiling it, here’s an example of its return value:

(constantly :foo)
;; => (closure ((x . :foo) t) (&rest _) x)

The environment has been captured as an association list (with a trailing t), and we can plainly see that the variable x is bound to the symbol :foo in this closure. Consider that we could manipulate this data structure (e.g. setcdr or setf) to change the binding of x for this closure. This is essentially how closures mutate their own environment. Moreover, closures from the same environment share structure, so such mutations are also shared. More on this later.

Semantically, closures are distinct objects (via eq), even if the variables they close over are bound to the same value. This is because they each have a distinct environment attached to them, even if in some invisible way.

(eq (constantly :foo) (constantly :foo))
;; => nil

Without byte compilation, this is true even when there’s no lexical environment to capture:

(defun dummy ()
  (lambda () t))

(eq (dummy) (dummy))
;; => nil

The byte compiler is smart, though. As an optimization, the same closure object is reused when possible, avoiding unnecessary work, including multiple object allocations. Though this is a bit of an abstraction leak. A function can (ab)use this to introspect whether it’s been byte compiled:

(defun have-i-been-compiled-p ()
  (let ((funcs (vector nil nil)))
    (dotimes (i 2)
      (setf (aref funcs i) (lambda ())))
    (eq (aref funcs 0) (aref funcs 1))))

(have-i-been-compiled-p)
;; => nil

(byte-compile 'have-i-been-compiled-p)

(have-i-been-compiled-p)
;; => t

The trick here is to evaluate the exact same non-capturing lambda expression twice, which requires a loop (or at least some sort of branch). Semantically we should think of these closures as being distinct objects, but, if we squint our eyes a bit, we can see the effects of the behind-the-scenes optimization.

Don’t actually do this in practice, of course. That’s what byte-code-function-p is for, which won’t rely on a subtle implementation detail.


Overcapturing

I mentioned before that one of the potential gotchas of not byte compiling your lambda expressions is overcapturing closure variables in the interpreter.

To evaluate lisp code, Emacs has both an interpreter and a virtual machine. The interpreter evaluates code in list form: cons cells, numbers, symbols, etc. The byte compiler is like the interpreter, but instead of directly executing those forms, it emits byte-code that, when evaluated by the virtual machine, produces identical visible results to the interpreter — in theory.

What this means is that Emacs contains two different implementations of Emacs Lisp, one in the interpreter and one in the byte compiler. The Emacs developers have been maintaining and expanding these implementations side-by-side for decades. A pitfall to this approach is that the implementations can, and do, diverge in their behavior. We saw this above with that introspective function, and it comes up in practice with advice.

Another way they diverge is in closure variable capture. For example:

;;; -*- lexical-binding: t; -*-

(defun overcapture (x y)
  (when y
    (lambda () x)))

(overcapture :x :some-big-value)
;; => (closure ((y . :some-big-value) (x . :x) t) nil x)

Notice that the closure captured y even though it’s unnecessary. This is because the interpreter doesn’t, and shouldn’t, take the time to analyze the body of the lambda to determine which variables should be captured. That would need to happen at run-time each time the lambda is evaluated, which would make the interpreter much slower. Overcapturing can get pretty messy if macros are introducing their own hidden variables.

On the other hand, the byte compiler can do this analysis just once at compile-time. And it’s already doing the analysis as part of its job. It can avoid this problem easily:

(overcapture :x :some-big-value)
;; => #[0 "\300\207" [:x] 1]

It’s clear that :some-big-value isn’t present in the closure.

But… how does this work?

How byte compiled closures are constructed

Recall from the internals article that the four core elements of a byte-code function object are:

  1. Parameter specification
  2. Byte-code string (opcodes)
  3. Constants vector
  4. Maximum stack usage

While it might seem like a whole new function is compiled each time the lambda expression is evaluated, there’s actually not that much to it! The behavior of the function remains the same; only the closed-over environment changes.

What this means is that closures produced by a common lambda expression can all share the same byte-code string (second element). Their bodies are identical, so they compile to the same byte-code. Where they differ is in their constants vector (third element), which gets filled out according to the closed-over environment. It’s clear just from examining the outputs:

(constantly :a)
;; => #[128 "\300\207" [:a] 2]

(constantly :b)
;; => #[128 "\300\207" [:b] 2]
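You can check the sharing directly. Assuming constantly has been byte compiled as above, the byte-code strings of two closures from the same lambda are the very same object, while their constants vectors differ. (Byte-code objects happen to be indexable with aref; this relies on an implementation detail and is shown only for illustration.)

```elisp
(let ((f (constantly :a))
      (g (constantly :b)))
  (list (eq (aref f 1) (aref g 1))      ; shared byte-code string => t
        (equal (aref f 2) (aref g 2)))) ; different constants => nil
;; => (t nil)
```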

constantly has three of the four components of the closure in its own constant pool. Its job is to construct the constants vector, and then assemble the whole thing into a byte-code function object (#[...]). Here it is with M-x disassemble:

0       constant  make-byte-code
1       constant  128
2       constant  "\300\207"
4       constant  vector
5       stack-ref 4
6       call      1
7       constant  2
8       call      4
9       return

(Note: since the byte compiler doesn’t produce perfectly optimal code, I’ve simplified it for this discussion.)

It pushes most of its constants on the stack. Then the stack-ref 4 (5) puts x on the stack. Then it calls vector to create the constants vector (6). Finally, it constructs the function object (#[...]) by calling make-byte-code (8).

Since this might be clearer, here’s the same thing expressed back in terms of Elisp:

(defun constantly (x)
  (make-byte-code 128 "\300\207" (vector x) 2))

To see the disassembly of the closure’s byte-code:

(disassemble (constantly :x))

The result isn’t very surprising:

0       constant  :x
1       return

Things get a little more interesting when mutation is involved. Consider this adder closure generator, which mutates its environment every time it’s called:

(defun adder ()
  (let ((total 0))
    (lambda () (cl-incf total))))

(let ((count (adder)))
  (funcall count)
  (funcall count)
  (funcall count))
;; => 3

(adder)
;; => #[0 "\300\211\242T\240\207" [(0)] 2]

The adder essentially works like this:

(defun adder ()
  (make-byte-code 0 "\300\211\242T\240\207" (vector (list 0)) 2))

In theory, this closure could operate by mutating its constants vector directly. But that wouldn’t be much of a constants vector, now would it!? Instead, mutated variables are boxed inside a cons cell. Closures don’t share constant vectors, so the main reason for boxing is to share variables between closures from the same environment. That is, they have the same cons in each of their constant vectors.

There’s no equivalent Elisp for the closure in adder, so here’s the disassembly:

0       constant  (0)
1       dup
2       car-safe
3       add1
4       setcar
5       return

It puts two references to the boxed integer on the stack (constant, dup), unboxes the top one (car-safe), increments the unboxed integer, and stores it back in the box (setcar) via the bottom reference, leaving the incremented value behind to be returned.

This all gets a little more interesting when closures interact:

(defun fancy-adder ()
  (let ((total 0))
    `(:add ,(lambda () (cl-incf total))
      :set ,(lambda (v) (setf total v))
      :get ,(lambda () total))))

(let ((counter (fancy-adder)))
  (funcall (plist-get counter :set) 100)
  (funcall (plist-get counter :add))
  (funcall (plist-get counter :add))
  (funcall (plist-get counter :get)))
;; => 102

(fancy-adder)
;; => (:add #[0 "\300\211\242T\240\207" [(0)] 2]
;;     :set #[257 "\300\001\240\207" [(0)] 3]
;;     :get #[0 "\300\242\207" [(0)] 1])

This is starting to resemble object oriented programming, with methods acting upon fields stored in a common, closed-over environment.

All three closures share a common variable, total. Since I didn’t use print-circle, this isn’t obvious from the last result, but each of those (0) conses is the same object. When one closure mutates the box, they all see the change. Here’s essentially how fancy-adder is transformed by the byte compiler:

(defun fancy-adder ()
  (let ((box (list 0)))
    (list :add (make-byte-code 0 "\300\211\242T\240\207" (vector box) 2)
          :set (make-byte-code 257 "\300\001\240\207" (vector box) 3)
          :get (make-byte-code 0 "\300\242\207" (vector box) 1))))

The backquote in the original fancy-adder brings this article full circle. This final example wouldn’t work correctly if those lambdas weren’t evaluated properly.

-1:-- What's in an Emacs Lambda (Post)--L0--C0--December 14, 2017 06:18 PM

Timo Geusch: Running Emacs from inside Emacs

I’m experimenting with screen recordings at the moment and just out of curiosity decided to see if I can load and edit a text file inside the main Emacs process from inside an ansi-term using emacsclient. Spoiler alert – yes, Read More

The post Running Emacs from inside Emacs appeared first on The Lone C++ Coder's Blog.

-1:-- Running Emacs from inside Emacs (Post Timo Geusch)--L0--C0--December 14, 2017 05:44 AM

(or emacs: Comparison of transaction fees on Patreon and similar services

On December 7, Patreon made an announcement about the change in their transaction fee structure. The results as of December 10 speak for themselves:

December 2017 summary: -$29 in pledges, -6 patrons

All leaving patrons gave "I'm not happy with Patreon's features or services." as the reason for leaving, with quotes ranging from:

The billing changes are not great.

to:

Patreon's new fees are unacceptable

In this article, I will explore the currently available methods for supporting sustainable Free Software development and compare their transaction fees.

My experience

My experience taking donations is very short: I announced my fundraising campaign on Patreon in October 2017.

Here's what I collected so far, vs the actual money spent by the contributors:

  • 2017-11-01: $140.42 / $162.50 = 86.41%
  • 2017-12-01: $163.05 / $187.50 = 86.96%

The numbers here are using the old Patreon rules that are going away this month.

Real numbers

method          formula         charged   donated   fee
old Patreon     ???             $1.00     $0.86     14%
new Patreon     7.9% + $0.35    $1.38     $0.95     31%
                                $2.41     $1.90     21%
                                $5.50     $4.75     14%
OpenCollective  12.9% + $0.30   $1.33     $0.90     32%
                                $2.36     $1.80     24%
                                $5.45     $4.50     18%
Flattr          16.5%           $1.00     $0.84     17%
                                $2.00     $1.67     17%
                                $5.00     $4.18     17%
Liberapay       0.585%          $1.00     $0.99     1%
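The fee column follows directly from the charged and donated columns. As a quick sketch (fee-percent is a throwaway helper, not part of any of these services), here are the three new-Patreon rows recomputed:

```elisp
(defun fee-percent (charged donated)
  "Percentage of CHARGED that is lost to fees."
  (* 100 (/ (- charged donated) charged)))

(mapcar (lambda (p) (round (fee-percent (car p) (cadr p))))
        '((1.38 0.95) (2.41 1.90) (5.50 4.75)))
;; => (31 21 14)
```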

On Patreon

Just like everyone else, I'm not happy with the incoming change to the Patreon fees. But even after the change, it's still a better deal than OpenCollective, which is used quite successfully e.g. by CIDER.

Just to restate the numbers in the table: if all backers give $1 (which is the majority currently, and I actually would generally prefer 5 new $1 backers over 1 new $5 backer), with the old system I get $0.86, while with the new system it's $0.69. That's more than a 100% increase in transaction fees.

On OpenCollective

It's more expensive than the new Patreon fees in every category or scenario.

On Flattr

Flattr is in the same bucket as Patreon, except with slightly lower fees currently. Their default plan sounds absolutely ridiculous to me: you install a browser plug-in so that a for-profit corporation can track which websites you visit most often in order to distribute the payments you give them among those websites.

If it were a completely local tool that didn't upload any data to the internet and instead gave you a monthly report to adjust your donations, it would be a good enough tool. Maybe with some adjustments for mind-share bubbles, which result in prominent projects getting more rewards than they can handle, while small projects fade away into obscurity without getting a chance. But right now it's completely crazy. Still, if you don't install the plug-in, you can probably still use Flattr and it will work similarly to Patreon.

I made an account, just in case, but I wouldn't recommend going to Flattr unless you're already there, or the first impression it made on me is wrong.

On Paypal

Paypal is OK in a way, since a lot of the time organizations like Patreon are just middlemen on top of Paypal. On the other hand, there's no way to set up recurring donations. And it's harder for me to plan decisions regarding my livelihood if I don't know at least approximately the sum I'll be getting next month.

My account, in case you want to make a lump sum donation:

On Bitcoin

Bitcoin is similar to Paypal, except it also:

  • has a very bad impact on the environment,
  • is a speculative bubble that supports either earning or losing money without actually providing value to society.

I prefer to stay away from Bitcoin.


On Liberapay

Liberapay sounds almost too good to be true. At the same time, their fees are very realistic, you could almost say optimal, since there are no fees for transfers between members. So you can spend either €20.64 (via card) or €20.12 (via bank wire) to charge €20 into your account and give me €1 per month at no further cost. If you change your mind after one month, you can withdraw your remaining €19 for free if you use a SEPA (Single Euro Payments Area) bank.

If I set out today to set up a service similar to Liberapay, even with my best intentions and the most optimistic expectations, I don't see how a better offer could be made. I recommend anyone who wants to support me to try it out. And, of course, I will report back with real numbers if anything comes out of it.

Thanks to all my patrons for their former and ongoing support. At one point we were at 30% of the monthly goal (25% at the moment). This made me very excited and optimistic about the future. Although I've been doing Free Software for almost 5 years now, that's actually 3 years in academia and 2 years in industry. Right now I feel a burnout looming on the horizon, and I was really hoping to avoid it by spending less time working at for-profit corporations. Any help, whether money or advice, is appreciated. If you're part of a Software Engineering or Research collective that makes you feel inspired instead of exhausted in the evening, and you have open positions in the EU or remote, have a look at my LinkedIn; maybe we could become colleagues in the future. I'll accept connections from anyone: if you're reading this blog, we probably have a lot in common, and it's always better together.

-1:-- Comparison of transaction fees on Patreon and similar services (Post)--L0--C0--December 09, 2017 11:00 PM

Manuel Uberti: Learning Haskell

Since my first baby steps in the world of Functional Programming, Haskell has been there. Like the enchanting music of a Siren, it has been luring me with promises of a new set of skills and a better understanding of the lambda calculus.

I refused to oblige at first. A bit of Scheme and my eventual move to Clojure occupied my mind and my daily activities. Truth be told, the odious warfare between dynamic-types troopers and static-types zealots didn’t help steer my enthusiasm towards Haskell.

Still, my curiosity is stoic and hard to kill and the Haskell Siren was becoming too tempting to resist any further. The Pragmatic Programmer in me knew it was the right thing to do. My knowledge portfolio is always reaching out for something new.

My journey began with the much-praised Programming in Haskell. I kept track of the exercises, only to soon discover this wasn’t the right book for me. It was a bit too terse and schematic for my taste; I needed something that could ease me in in a different way, with more focus on the basics, the roots of the language.

As I usually do, I sought help online. I don’t know many Haskell developers, but I know there are crazy guys in the Emacs community. Steve Purcell was kind and patient enough to introduce me to Haskell Programming From First Principles.

This is a huge book (nearly 1300 pages), but it took just the authors’ prefaces to hook me. Julie Moronuki’s words in particular resonated heavily with me. Unlike Julie, I have experience in programming, but I felt exactly like her when it comes to approaching Haskell teaching materials.

So here I am, armed with Stack and Intero and ready to abandon myself to the depths and wonders of static typing and pure functional programming. I will track my progress and maybe report back here. I already have a project in mind, but my Haskell needs to get really good before starting any serious work.

May the lambda be with me.

-1:-- Learning Haskell (Post)--L0--C0--December 08, 2017 12:00 AM

Sanel Zukan: Distraction-free EWW surfing

Sometimes when I plan to read a longish html text, I fire up EWW, a small web browser that comes with Emacs.

However, reading pages on a larger monitor doesn't provide a good experience, at least not for me. Here is an example:


Let's fix that with some elisp code:

(defun eww-more-readable ()
  "Make eww more pleasant to use. Run it after eww buffer is loaded."
  (interactive)
  (setq eww-header-line-format nil)               ;; remove page title
  (setq mode-line-format nil)                     ;; remove mode-line
  (set-window-margins (get-buffer-window) 20 20)  ;; increase size of margins
  (redraw-display)                                ;; apply mode-line changes
  (eww-reload 'local))                            ;; apply eww-header changes

EWW already comes with eww-readable function, so I named it eww-more-readable.

Evaluate it and call with:

M-x eww-more-readable

Result is much better now:


EDIT: Chunyang Xu noticed that the elisp code had a balanced-parentheses issue and also suggested using (eww-reload 'local) to avoid re-fetching the page. Thanks!

-1:-- Distraction-free EWW surfing (Post)--L0--C0--November 30, 2017 11:00 PM

Pragmatic Emacs: Reorder TODO items in your org-mode agenda

I use org-mode to manage my to-do list with priorities and deadlines but inevitably I have multiple items without a specific deadline or scheduled date and that have the same priority. These appear in my agenda in the order in which they were added to my to-do list, but I’ll sometimes want to change that order. This can be done temporarily using M-UP or M-DOWN in the agenda view, but these changes are lost when the agenda is refreshed.

I came up with a two-part solution to this. The main part is a generic function to move the subtree at the current point to be the top item of all subtrees of the same level. Here is the function:

(defun bjm/org-headline-to-top ()
  "Move the current org headline to the top of its section"
  (interactive)
  ;; check if we are at the top level
  (let ((lvl (org-current-level)))
    (cond
     ;; above all headlines so nothing to do
     ((not lvl)
      (message "No headline to move"))
     ((= lvl 1)
      ;; if at top level move current tree to go above first headline
      (org-cut-subtree)
      (goto-char (point-min))
      ;; test if point is now at the first headline and if not then
      ;; move to the first headline
      (unless (looking-at-p "*")
        (org-next-visible-heading 1))
      (org-paste-subtree))
     ((> lvl 1)
      ;; if not at top level then get position of headline level above
      ;; current section and refile to that position. Inspired by
      (let* ((org-reverse-note-order t)
             (pos (save-excursion
                    (outline-up-heading 1)
                    (point)))
             (filename (buffer-file-name))
             (rfloc (list nil filename nil pos)))
        (org-refile nil nil rfloc))))))

This will move any to-do item to the top of all of the items at the same level as that item. This is equivalent to putting the cursor on the headline you want to move and hitting M-UP until you reach the top of the section.

Now I want to be able to run this from the agenda-view, which is accomplished with the following function, which I then bind to the key 1 in the agenda view.

(defun bjm/org-agenda-item-to-top ()
  "Move the current agenda item to the top of the subtree in its file"
  (interactive)
  ;; save buffers to preserve agenda
  (org-save-all-org-buffers)
  ;; switch to buffer for current agenda item
  (org-agenda-switch-to)
  ;; move item to top
  (bjm/org-headline-to-top)
  ;; go back to agenda view
  (switch-to-buffer (other-buffer (current-buffer) 1))
  ;; refresh agenda
  (org-agenda-redo))

;; bind to key 1
(define-key org-agenda-mode-map (kbd "1") 'bjm/org-agenda-item-to-top)

Now in my agenda view, I just hit 1 on a particular item and it is moved permanently to the top of its level (with deadlines and priorities still taking precedence in the final sorting order).

-1:-- Reorder TODO items in your org-mode agenda (Post Ben Maughan)--L0--C0--November 30, 2017 09:56 PM

Emacs café: Introducing Elbank

Elbank is a new Emacs package I’ve been working on lately. It’s a personal finance and budgeting package for Emacs that uses Weboob for scraping data from bank websites.

Overview buffer

I started building Elbank after using Ledger for several years. While Ledger is a real gem, I didn’t want to spend time doing bookkeeping anymore.

Instead, I wanted a simple reporting tool that would automatically scrape data and build reports from it within Emacs.

Setting up Weboob

To use Elbank, you will first have to install Weboob. Weboob is a collection of applications used to interact with websites from the command line. Elbank uses the banking application named boobank to scrape data.

The list of currently supported bank websites is available on this page.

Fortunately, installing Weboob should be a breeze, as there are packages for most GNU/Linux distros and a Homebrew formula for Mac users.

Once Weboob is installed, run boobank in a console to setup your accounts.

Installing Elbank

You can now install elbank from MELPA1 by running M-x package-install RET elbank RET, and voila!

Using Elbank

The overview buffer

Run M-x elbank-overview to get started. The overview buffer lists all accounts, as well as custom reports and budgets.

Press u to import the bank statements from your bank website.

You can click on each account or report displayed in the buffer to open them.

Categorizing transactions

Transaction categories are an important aspect of Elbank: they make it possible to filter transactions and to budget.

Transactions are automatically categorized when reporting, using the custom variable elbank-categories.

Here’s an example value for elbank-categories; you should adjust it based on your own transactions and categorizing needs.

(setq elbank-categories
      '(("Expenses:Food" . ("^supermarket" 
                            "Local store XXX" 
                            "Bakery XXX"))
        ("Expenses:Rent" . ("Real Estate Agency XXX"))
        ("Income:Salary" . ("Bank transfer from Company XXX"))))

Each transaction’s text is matched against the regular expressions of elbank-categories; the first match determines the transaction’s category.
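The first-match rule can be sketched like this (a Python illustration, not Elbank’s own Emacs Lisp; details such as case sensitivity are assumptions):

```python
import re

# Each category maps to a list of regular expressions; the first
# pattern that matches a transaction's text decides its category.
CATEGORIES = {
    "Expenses:Food": [r"^supermarket", r"Local store XXX", r"Bakery XXX"],
    "Expenses:Rent": [r"Real Estate Agency XXX"],
    "Income:Salary": [r"Bank transfer from Company XXX"],
}

def categorize(text):
    for category, patterns in CATEGORIES.items():
        for pattern in patterns:
            if re.search(pattern, text):
                return category
    return None  # left uncategorized

print(categorize("supermarket BILLA 42"))  # Expenses:Food
```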


Run M-x elbank-report to create a new report. The command will prompt you for an account, period and category, all of which are optional.


Here’s the list of keybindings available in a report buffer:

  • f c: Filter the transactions by category
  • f a: Only show transactions in a specified account
  • f p: Select the period of the report
  • G: Group transactions by some property
  • S: Sort transactions
  • s: Reverse the sort order
  • M-p: Move backward by one period (month or year)
  • M-n: Move forward by one period (month or year)

You can also customize the variable elbank-saved-monthly-reports and elbank-saved-yearly-reports to conveniently get a quick list of commonly used reports from the overview buffer.


The custom variable elbank-budget is used to define a monthly budget. It defines how much money you want to spend per category of transaction, like "Food" or "Rent".

(setq elbank-budget '(("Expenses:Food" . 300)
                      ("Expenses:Rent" . 450)
                      ("Expenses:Transport" . 120)
                      ("Expenses:Utilities" . 145)))

Note that budgeted amounts are positive numbers while expenses have negative values.
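The arithmetic behind a budget report can be sketched like this (the numbers and category names are made up; this is an illustration, not Elbank’s implementation):

```python
# Hypothetical data: budgeted amounts are positive, expenses negative,
# matching the sign convention described above.
budget = {"Expenses:Food": 300, "Expenses:Rent": 450}
transactions = [
    ("Expenses:Food", -120.50),
    ("Expenses:Food", -80.25),
    ("Expenses:Rent", -450.00),
]

# Sum spending per category.
spent = {}
for category, amount in transactions:
    spent[category] = spent.get(category, 0.0) + amount

# Compare against the budget: flip the sign of expenses first.
for category, budgeted in budget.items():
    used = -spent.get(category, 0.0)
    print(f"{category}: {used:.2f} spent of {budgeted} ({budgeted - used:.2f} left)")
```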

Press b from the overview buffer or evaluate M-x elbank-budget-report to see your expenses based on your budget.

Budget report

You can switch periods with M-p and M-n the same way as in report buffers.


That’s all for now!

Elbank is still in its infancy, but I’m already using it daily. If you find any bug or would like to suggest improvements, feel free to open a ticket on the GitHub project.

  1. As of today, the package is not yet in MELPA, but a pull request is in review to add it to the repository. 

-1:-- Introducing Elbank (Post Nicolas Petton)--L0--C0--November 30, 2017 01:28 PM

emacspeak: Emacspeak 47.0 (GentleDog) Unleashed!

Emacspeak 47.0—GentleDog—Unleashed!

*For Immediate Release:

San Jose, Calif., (November 22, 2017)

Emacspeak 47.0 (GentleDog):
Redefining Accessibility In The Age Of User-Aware Interfaces
–Zero cost of Ownership makes priceless software Universally affordable!

Emacspeak Inc (NASDOG: ESPK) —
— announces the immediate world-wide availability of Emacspeak 47.0
(GentleDog) — a powerful audio desktop for leveraging today's
evolving Data, Social and Assistant-Oriented Internet cloud.

1 Investors Note:

With several prominent tweeters expanding coverage of #emacspeak,
NASDOG: ESPK has now been consistently trading over the social net at
levels close to that once attained by DogCom high-fliers—and as of
2017 is trading at levels close to that achieved by once better known
stocks in the tech sector.

2 What Is It?

Emacspeak is a fully functional audio desktop that provides complete
eyes-free access to all major 32 and 64 bit operating environments. By
seamlessly blending live access to all aspects of the Internet such as
ubiquitous assistance, Web-surfing, blogging, social computing and
electronic messaging into the audio desktop, Emacspeak enables speech
access to local and remote information with a consistent and
well-integrated user interface. A rich suite of task-oriented tools
provides efficient speech-enabled access to the evolving
assistant-oriented social Internet cloud.

3 Major Enhancements:

This version requires emacs-25.1 or later.

  1. speech-Enable Extensible EVIL — VI Layer: ⸎
  2. Bookshare — Support Additional downloads (epub3,mp3): 🕮
  3. Bookmark support for EBooks in EWW 📔
  4. Speech-Enable VDiff — A Diff tool: ≏
  5. Speech-enable Package shx —Shell Extras For Emacs: 🖁
  6. Updated IDO Support: ⨼
  7. Implemented NOAA Weather API: ☔
  9. Speech-Enable Typographic Editing Support: 🖶
  9. Speech-Enable Package Origami: 🗀
  10. Magit Enhancements for Magitians: 🎛
  11. Speech-Enable RipGrep Front-End: ┅
  12. Added SmartParen Support: 〙
  13. Speech-enabled Minesweeper game: 🤯

    • And a lot more than will fit in this margin. … 🗞

4 Establishing Liberty, Equality And Freedom:

Never a toy system, Emacspeak is voluntarily bundled with all
major Linux distributions. Though designed to be modular,
distributors have freely chosen to bundle the fully integrated
system without any undue pressure—a documented success for
the integrated innovation embodied by Emacspeak. As the system
evolves, both upgrades and downgrades continue to be available at
the same zero-cost to all users. The integrity of the Emacspeak
codebase is ensured by the reliable and secure Linux platform
used to develop and distribute the software.

Extensive studies have shown that thanks to these features, users
consider Emacspeak to be absolutely priceless. Thanks to this
wide-spread user demand, the present version remains priceless
as ever—it is being made available at the same zero-cost as
previous releases.

At the same time, Emacspeak continues to innovate in the area of
eyes-free Assistance and social interaction and carries forward the
well-established Open Source tradition of introducing user interface
features that eventually show up in luser environments.

On this theme, when once challenged by a proponent of a crash-prone
but well-marketed mousetrap with the assertion "Emacs is a system from
the 70's", the creator of Emacspeak evinced surprise at the unusual
candor manifest in the assertion that it would take popular
idiot-proven interfaces until the year 2070 to catch up to where the
Emacspeak audio desktop is today. Industry experts welcomed this
refreshing breath of Courage Certainty and Clarity (CCC) at a time
when users are reeling from the Fear Uncertainty and Doubt (FUD)
unleashed by complex software systems backed by even more convoluted
press releases.

5 Independent Test Results:

Independent test results have proven that unlike some modern (and
not so modern) software, Emacspeak can be safely uninstalled without
adversely affecting the continued performance of the computer. These
same tests also revealed that once uninstalled, the user stopped
functioning altogether. Speaking with Aster Labrador, the creator of
Emacspeak once pointed out that these results re-emphasize the
user-centric design of Emacspeak; "It is the user –and not the
computer– that stops functioning when Emacspeak is uninstalled!".

5.1 Note from Aster, Bubbles and Tilden:

UnDoctored Videos Inc. is looking for volunteers to star in a
video demonstrating such complete user failure.

6 Obtaining Emacspeak:

Emacspeak can be downloaded from GitHub; you can also visit Emacspeak on
the WWW. You can subscribe to the emacspeak mailing list by sending mail
to the list request address. The Emacspeak blog is a good source for
news about recent enhancements and how to use them.

The latest development snapshot of Emacspeak is always available via
Git from the Emacspeak GitHub repository.

7 History:

  • Emacspeak 47.0 (GentleDog) goes the next step in being helpful
    while letting users learn and grow.
  • Emacspeak 46.0 (HelpfulDog) heralds the coming of Smart Assistants.
  • Emacspeak 45.0 (IdealDog) is named in recognition of Emacs'
    excellent integration with various programming language
    environments — thanks to this, Emacspeak is the IDE of choice
    for eyes-free software engineering.
  • Emacspeak 44.0 continues the steady pace of innovation on the
    audio desktop.
  • Emacspeak 43.0 brings even more end-user efficiency by leveraging the
    ability to spatially place multiple audio streams to provide timely
    auditory feedback.
  • Emacspeak 42.0 while moving to GitHub from Google Code continues to
    innovate in the areas of auditory user interfaces and efficient,
    light-weight Internet access.
  • Emacspeak 41.0 continues to improve
    on the desire to provide not just equal, but superior access —
    technology when correctly implemented can significantly enhance the
    human ability.
  • Emacspeak 40.0 goes back to Web basics by enabling
    efficient access to large amounts of readable Web content.
  • Emacspeak 39.0 continues the Emacspeak tradition of increasing the breadth of
    user tasks that are covered without introducing unnecessary
  • Emacspeak 38.0 is the latest in a series of award-winning
    releases from Emacspeak Inc.
  • Emacspeak 37.0 continues the tradition of
    delivering robust software as reflected by its code-name.
  • Emacspeak 36.0 enhances the audio desktop with many new tools including full
    EPub support — hence the name EPubDog.
  • Emacspeak 35.0 is all about
    teaching a new dog old tricks — and is aptly code-named HeadDog in
    honor of our new Press/Analyst contact. emacspeak-34.0 (AKA Bubbles)
    established a new beach-head with respect to rapid task completion in
    an eyes-free environment.
  • Emacspeak-33.0 AKA StarDog brings
    unparalleled cloud access to the audio desktop.
  • Emacspeak 32.0 AKA
    LuckyDog continues to innovate via open technologies for better
  • Emacspeak 31.0 AKA TweetDog — adds tweeting to the Emacspeak
  • Emacspeak 30.0 AKA SocialDog brings the Social Web to the
    audio desktop—you can't but be social if you speak!
  • Emacspeak 29.0—AKA AbleDog—is a testament to the resilience and innovation
    embodied by Open Source software—it would not exist without the
    thriving Emacs community that continues to ensure that Emacs remains
    one of the premier user environments despite perhaps also being one of
    the oldest.
  • Emacspeak 28.0—AKA PuppyDog—exemplifies the rapid pace of
    development evinced by Open Source software.
  • Emacspeak 27.0—AKA
    FastDog—is the latest in a sequence of upgrades that make previous
    releases obsolete and downgrades unnecessary.
  • Emacspeak 26—AKA
    LeadDog—continues the tradition of introducing innovative access
    solutions that are unfettered by the constraints inherent in
    traditional adaptive technologies.
  • Emacspeak 25 —AKA ActiveDog
    —re-activates open, unfettered access to online
  • Emacspeak-Alive —AKA LiveDog —enlivens open, unfettered
    information access with a series of live updates that once again
    demonstrate the power and agility of open source software
  • Emacspeak 23.0 — AKA Retriever—went the extra mile in
    fetching full access.
  • Emacspeak 22.0 —AKA GuideDog —helps users
    navigate the Web more effectively than ever before.
  • Emacspeak 21.0
    —AKA PlayDog —continued the
    Emacspeak tradition of relying on enhanced
    productivity to liberate users.
  • Emacspeak-20.0 —AKA LeapDog —continues
    the long established GNU/Emacs tradition of integrated innovation to
    create a pleasurable computing environment for eyes-free
  • emacspeak-19.0 –AKA WorkDog– is designed to enhance
    user productivity at work and leisure.
  • Emacspeak-18.0 –code named
    GoodDog– continued the Emacspeak tradition of enhancing user
    productivity and thereby reducing total cost of
  • Emacspeak-17.0 –code named HappyDog– enhances user
    productivity by exploiting today's evolving WWW
  • Emacspeak-16.0 –code named CleverDog– the follow-up to
    SmartDog– continued the tradition of working better, faster,
  • Emacspeak-15.0 –code named SmartDog–followed up on TopDog
    as the next in a continuing series of award-winning audio desktop
    releases from Emacspeak Inc.
  • Emacspeak-14.0 –code named TopDog–was

the first release of this millennium.

  • Emacspeak-13.0 –codenamed
    YellowLab– was the closing release of the
    20th century.
  • Emacspeak-12.0 –code named GoldenDog– began
    leveraging the evolving semantic WWW to provide task-oriented speech
    access to Webformation.
  • Emacspeak-11.0 –code named Aster– went the
    final step in making Linux a zero-cost Internet access solution for
    blind and visually impaired users.
  • Emacspeak-10.0 –(AKA
    Emacspeak-2000) code named WonderDog– continued the tradition of
    award-winning software releases designed to make eyes-free computing a
    productive and pleasurable experience.
  • Emacspeak-9.0 –(AKA
    Emacspeak 99) code named BlackLab– continued to innovate in the areas
    of speech interaction and interactive accessibility.
  • Emacspeak-8.0 –(AKA Emacspeak-98++) code named BlackDog– was a major upgrade to
    the speech output extension to Emacs.
  • Emacspeak-95 (code named Illinois) was released as OpenSource on
    the Internet in May 1995 as the first complete speech interface
    to UNIX workstations. The subsequent release, Emacspeak-96 (code
    named Egypt) made available in May 1996 provided significant
    enhancements to the interface. Emacspeak-97 (Tennessee) went
    further in providing a true audio desktop. Emacspeak-98
    integrated Internetworking into all aspects of the audio desktop
    to provide the first fully interactive speech-enabled WebTop.

8 About Emacspeak:

Originally based at Cornell (NY), home to Auditory User Interfaces
(AUI) on the WWW, Emacspeak is now maintained on GitHub. The system is mirrored
world-wide by an international network of software archives and
bundled voluntarily with all major Linux distributions. On Monday,
April 12, 1999, Emacspeak became part of the Smithsonian's Permanent
Research Collection
on Information Technology at the Smithsonian's
National Museum of American History.

The Emacspeak mailing list is archived at Vassar –the home of the
Emacspeak mailing list– thanks to Greg Priest-Dorman, and provides a
valuable knowledge base for new users.

9 Press/Analyst Contact: Tilden Labrador

Going forward, Tilden acknowledges his exclusive monopoly on
setting the direction of the Emacspeak Audio Desktop, and
promises to exercise this freedom to innovate and her resulting
power responsibly (as before) in the interest of all dogs.

*About This Release:

Windows-Free (WF) is a favorite battle-cry of The League Against
Forced Fenestration (LAFF). –see for details on
the ill-effects of Forced Fenestration.

CopyWrite )C( Aster, Hubbell and Tilden Labrador. All Writes Reserved.
HeadDog (DM), LiveDog (DM), GoldenDog (DM), BlackDog (DM) etc., are Registered
Dogmarks of Aster, Hubbell and Tilden Labrador. All other dogs belong to
their respective owners.

-1:-- Emacspeak 47.0 (GentleDog) Unleashed! (Post T. V. Raman)--L0--C0--November 22, 2017 12:42 AM

Pragmatic Emacs: Pop up a quick shell with shell-pop

There are several ways of running a shell inside Emacs. I don’t find that I need to use it very often as I do so much within Emacs these days, but when I do, it’s handy to quickly bring up the shell, run a command, and then dismiss it again. The shell-pop package does this very smartly. One key combo (I use C-t) pops up a shell window for the directory containing the file you are currently editing, and then C-t dismisses the shell window when you are done.

The GitHub page has lots of details on how to configure it; I use a fairly minimal setup with the ansi-term terminal emulator and zsh as my shell. Here is my configuration:

(use-package shell-pop
  :bind (("C-t" . shell-pop))
  :config
  (setq shell-pop-shell-type (quote ("ansi-term" "*ansi-term*" (lambda nil (ansi-term shell-pop-term-shell)))))
  (setq shell-pop-term-shell "/bin/zsh")
  ;; need to do this manually or not picked up by `shell-pop'
  (shell-pop--set-shell-type 'shell-pop-shell-type shell-pop-shell-type))

The last line is needed but I can’t remember where I got it from!

-1:-- Pop up a quick shell with shell-pop (Post Ben Maughan)--L0--C0--November 20, 2017 10:06 PM

Flickr tag 'emacs': GAMS mode: the basic screenshot.

shiro.takeda posted a photo:

GAMS mode: the basic screenshot.

Screenshots of GAMS mode for Emacs.

-1:-- GAMS mode: the basic screenshot. (Post shiro.takeda)--L0--C0--November 19, 2017 07:25 AM

punchagan: Multiple remotes with nullmailer

This is a reference for future-me, and possibly for someone pulling an all-nighter trying to get nullmailer to use the correct “remote”.

What is nullmailer and why use it?

Nullmailer is a simple mail transfer agent that can forward mail to a remote mail server (or a bunch of them).

I use Emacs to send email, and it can be configured to talk to a remote SMTP server to send email. But, this blocks Emacs until the email is sent and the connection closed. This is annoying, and having nullmailer installed locally basically lets Emacs delegate this job without blocking.

Why multiple remotes?

I have multiple email accounts, and I’d like to use the correct remote server for sending email based on the FROM address.

I expected nullmailer to have some configuration option for this. But it turns out that nullmailer just tries the configured remotes in order until one of them succeeds.

How do we, then, send email from the correct remote SMTP server?

Currently, I have two remotes: my personal domain and GMail.

Having GMail as the first remote in nullmailer’s configuration wouldn’t let me send emails from my personal domain. GMail seems to agree to send email coming from my personal domain, but overwrites the MAIL FROM address and changes it to my GMail address.

So, my personal domain’s server has to be the first remote. But this server also accepted and sent emails with a GMail FROM address. This was causing emails sent from my GMail ID to go into spam, as expected.

I had to reconfigure this mail server to reject relaying mails that didn’t belong to the correct domain names – i.e., reject relaying emails which had a GMail address in the FROM address.

smtpd_sender_restrictions had to be modified to include reject_sender_login_mismatch along with the other values, and smtpd_sender_login_maps had to be set to allow only my own domain. This serverfault answer explains this in more detail.
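The Postfix side of that change can be sketched roughly as follows (assuming the restriction meant is reject_sender_login_mismatch; the ordering, other values and the map path are placeholders, so adapt them to your own server):

```
# main.cf (sketch): reject mail whose sender address does not belong
# to the SASL login that submitted it
smtpd_sender_restrictions =
    reject_sender_login_mismatch,
    permit_sasl_authenticated,
    reject

# Map allowed sender addresses/domains to SASL logins (placeholder path)
smtpd_sender_login_maps = hash:/etc/postfix/sender_login_maps
```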

-1:-- Multiple remotes with nullmailer (Post)--L0--C0--November 17, 2017 11:20 PM

William Denton: Org clocktables II: Summarizing a month

In Org clocktables I: The daily structure I explained how I track my time working at an academic library, clocking in to projects that are categorized as PPK (“professional performance and knowledge,” our term for “librarianship”), PCS (“professional contributions and standing,” which covers research, professional development and the like) or Service. I do this by clocking in and out of tasks with the magic of Org.

I’ll add a day to the example I used before, to make it more interesting. This is what the raw text looks like:

* 2017-12 December

** [2017-12-01 Fri]
CLOCK: [2017-12-01 Fri 09:30]--[2017-12-01 Fri 09:50] =>  0:20
CLOCK: [2017-12-01 Fri 13:15]--[2017-12-01 Fri 13:40] =>  0:25

*** PPK

**** Libstats stuff
CLOCK: [2017-12-01 Fri 09:50]--[2017-12-01 Fri 10:15] =>  0:25

Pull numbers on weekend desk activity for A.

**** Ebook usage
CLOCK: [2017-12-01 Fri 13:40]--[2017-12-01 Fri 16:30] =>  2:50

Wrote code to grok EZProxy logs and look up ISBNs of Scholars Portal ebooks.

*** PCS

*** Service

**** Stewards' Council meeting
CLOCK: [2017-12-01 Fri 10:15]--[2017-12-01 Fri 13:15] =>  3:00

Copious meeting notes here.

** [2017-12-04 Mon]
CLOCK: [2017-12-04 Mon 09:30]--[2017-12-04 Mon 09:50] =>  0:20
CLOCK: [2017-12-04 Mon 12:15]--[2017-12-04 Mon 13:00] =>  0:45
CLOCK: [2017-12-04 Mon 16:00]--[2017-12-04 Mon 16:15] =>  0:15

*** PPK

**** ProQuest visit
CLOCK: [2017-12-04 Mon 09:50]--[2017-12-04 Mon 12:15] =>  2:25

Notes on this here.

**** Math print journals
CLOCK: [2017-12-04 Mon 16:15]--[2017-12-04 Mon 17:15] =>  1:00

Check current subs and costs; update list of print subs to drop.

*** PCS

**** Pull together sonification notes
CLOCK: [2017-12-04 Mon 13:00]--[2017-12-04 Mon 16:00] =>  3:00

*** Service

All raw Org text looks ugly, especially all those LOGBOOK and PROPERTIES drawers. Don’t let that put you off. This is what it looks like on my screen with my customizations (see my .emacs for details):

Much nicer in Emacs. Much nicer in Emacs.

At the bottom of the month I use Org’s clock table to summarize all this.

#+BEGIN: clocktable :maxlevel 3 :scope tree :compact nil :header "#+NAME: clock_201712\n"
#+NAME: clock_201712
| Headline             | Time  |      |      |
| *Total time*           | *14:45* |      |      |
| 2017-12 December     | 14:45 |      |      |
| \_  [2017-12-01 Fri] |       | 7:00 |      |
| \_    PPK            |       |      | 3:15 |
| \_    Service        |       |      | 3:00 |
| \_  [2017-12-04 Mon] |       | 7:45 |      |
| \_    PPK            |       |      | 3:25 |
| \_    PCS            |       |      | 3:00 |
#+END:

I just put in the BEGIN/END lines and then hit C-c C-c and Org creates that table. Whenever I add some more time, I can position the pointer on the BEGIN line and hit C-c C-c and it updates everything.

Now, there are lots of commands I could use to customize this, but this is pretty vanilla and it suits me. It makes it clear how much time I have down for each day and how much time I spent in each of the three pillars. It’s easy to read at a glance. I fiddled with various options but decided to stay with this.

It looks like this on my screen:

Much nicer in Emacs. Much nicer in Emacs.

That’s a start, but the data is not in a format I can use as is. The times are split across different columns, there are multiple levels of indents, there’s a heading and a summation row, etc. But! The data is in a table in Org, which means I can easily ingest it and process it in any language I choose, in the same Org file. That’s part of the power of Org: it turns raw data into structured data, which I can process with a script into a better structure, all in the same file, mixing text, data and output.

Which language, though? A real Emacs hacker would use Lisp, but that’s beyond me. I can get by in two languages: Ruby and R. I started doing this in Ruby, and got things mostly working, then realized how it should go and what the right steps were to take, and switched to R.

Here’s the plan:

  • ignore “Headline” and “Total time” and “2017-12 December” … in fact, ignore everything that doesn’t start with “\_”
  • clean up the remaining lines by removing “\_”
  • the first line will be a date stamp, with the total day’s time in the first column, so grab it
  • after that, every line will either be a PPK/PCS/Service line, in which case grab that time
  • or it will be a new date stamp, in which case capture that information and write out the previous day’s information
  • continue on through all the lines
  • until the end, at which point a day is finished but not written out, so write it out

I did this in R, using three packages to make things easier. For managing the time intervals I’m using hms, which seems like a useful tool. It needs to be a very recent version to make use of some time-parsing functions, so it needs to be installed from GitHub. Here’s the R:

library(dplyr)   ## pipeline verbs: filter, mutate, rename, select, %>%
library(stringr) ## str_replace
library(tibble)  ## tribble, add_row
library(hms)     ## Right now, needs GitHub version

clean_monthly_clocktable <- function (raw_clocktable) {
  ## Clean up the table into something simple
  clock <- raw_clocktable %>%
    filter(grepl("\\\\_", Headline)) %>%
    mutate(heading = str_replace(Headline, "\\\\_ *", "")) %>%
    mutate(heading = str_replace(heading, "] .*", "]")) %>%
    rename(total = X, subtotal = X.1) %>%
    select(heading, total, subtotal)

  ## Set up the table we'll populate line by line
  newclock <- tribble(~date, ~ppk, ~pcs, ~service, ~total)

  ## The first line we know has a date and time, and always will
  date_old <- substr(clock[1,1], 2, 11)
  total_time_old <- clock[1,2]
  date_new <- NA
  ppk <- pcs <- service <- total_time_new <- "0:00"

  ## Loop through all lines ...
  for (i in 2:nrow(clock)) {
    if      (clock[i,1] == "PPK")     { ppk      <- clock[i,3] }
    else if (clock[i,1] == "PCS")     { pcs      <- clock[i,3] }
    else if (clock[i,1] == "Service") { service  <- clock[i,3] }
    else {
      date_new <- substr(clock[i,1], 2, 11)
      total_time_new <- clock[i,2]
      ## When we see a new date, add the previous date's details to the table
      if (!is.na(date_new)) {
        newclock <- newclock %>% add_row(date = date_old, ppk, pcs, service, total = total_time_old)
        ppk <- pcs <- service <- "0:00"
        date_old <- date_new
        date_new <- NA
        total_time_old <- total_time_new
      }
    }
  }

  ## Finally, add the final date to the table, when all the rows are read.
  newclock <- newclock %>% add_row(date = date_old, ppk, pcs, service, total = total_time_old)
  newclock %>%
    mutate(ppk = parse_hm(ppk), pcs = parse_hm(pcs), service = parse_hm(service),
           total = parse_hm(total), lost = as.hms(total - (ppk + pcs + service))) %>%
    mutate(date = as.Date(date))
}

All of that is in a SRC block like below, but I separated the two in case it makes the syntax highlighting clearer. I don’t think it does, but such is life. Imagine the above code pasted into this block:

#+BEGIN_SRC R :session :results values

#+END_SRC

Running C-c C-c on that will produce no output, but it does create an R session and set up the function. (Of course, all of this will fail if you don’t have R (and those three packages) installed.)

With that ready, now I can parse that monthly clocktable by running C-c C-c on this next source block, which reads in the raw clock table (note the var setting, which matches the #+NAME above), parses it with that function, and outputs cleaner data. I have this right below the December clock table.

#+BEGIN_SRC R :session :results values :var clock_201712=clock_201712 :colnames yes
clean_monthly_clocktable(clock_201712)
#+END_SRC

|       date |      ppk |      pcs |  service |    total |     lost |
| 2017-12-01 | 03:15:00 | 00:00:00 | 03:00:00 | 07:00:00 | 00:45:00 |
| 2017-12-04 | 03:25:00 | 03:00:00 | 00:00:00 | 07:45:00 | 01:20:00 |

This is tidy data. It looks like this:

Again, in Emacs Again, in Emacs

That’s what I wanted. The code I wrote to generate it could be better, but it works, and that’s good enough.

Notice all of the same dates and time durations are there, but they’re organized much more nicely—and I’ve added “lost.” The “lost” count is how much time in the day was unaccounted for. This includes lunch (maybe I’ll end up classifying that differently), short breaks, ploughing through email first thing in the morning, catching up with colleagues, tidying up my desk, falling into Wikipedia, and all those other blocks of time that can’t be directly assigned to some project.

My aim is to keep track of the “lost” time and to minimize it, by a) not wasting time and b) properly classifying work. Talking to colleagues and tidying my desk is work, after all. It’s not immortally important work that people will talk about centuries from now, but it’s work. Not everything I do on the job can be classified against projects. (Not the way I think of projects—maybe lawyers and doctors and the self-employed think of them differently.)

The one technical problem with this is that when I restart Emacs I need to rerun the source block with the R function in it, to set up the R session and the function, before I can rerun the simple “update the monthly clocktable” block. However, because I don’t restart Emacs very often, that’s not a big problem.

The next stage of this is showing how I summarize the cleaned data to understand, each month, how much of my time I spent on PPK, PCS and Service. I’ll cover that in another post.

-1:-- Org clocktables II: Summarizing a month (Post William Denton)--L0--C0--November 17, 2017 04:46 AM

Chen Bin (redguardtoo): counsel-etags v1.3.1 is released

Counsel-etags is a generic solution for code navigation in Emacs.

It basically needs no setup. For example, one command counsel-etags-find-tag-at-point is enough to start code navigation immediately.

The package solves the common problems of using Ctags/Etags with Emacs.

Problem 1: Ctags takes a few seconds to update the tags file (the index file used to look up tags). The updating process blocks the user's further interaction. This problem is solved by the virtual-update function from counsel-etags. The setup is simple:

;; Don't ask before rereading the TAGS files if they have changed
(setq tags-revert-without-query t)
;; Don't warn when TAGS files are large
(setq large-file-warning-threshold nil)
;; Setup auto update now
(add-hook 'prog-mode-hook
  (lambda ()
    (add-hook 'after-save-hook
              'counsel-etags-virtual-update-tags 'append 'local)))

Problem 2: Tag lookup may fail if the latest code is not scanned yet. This problem is solved by running counsel-etags-grep automatically if counsel-etags-find-tag-at-point fails. So users always get results.

There are also other enhancements.

Enhancement 1: The Levenshtein distance algorithm is used to place better-matching candidates at the top. For example, a function named renderTable could be defined in many places in a ReactJS project, but it's very likely the user prefers the definition in the same component or folder where she triggers code navigation.
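The ranking idea can be sketched like so (a Python illustration, not the package's actual Emacs Lisp; the file paths are hypothetical):

```python
# Among candidate files defining the same tag, prefer the path closest
# -- by Levenshtein edit distance -- to the file where navigation started.

def levenshtein(a, b):
    # Classic dynamic-programming edit distance, row by row.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

current = "src/table/renderTable.js"   # hypothetical current file
candidates = ["src/table/renderTable.js.bak",
              "src/grid/renderTable.js",
              "vendor/lib/renderTable.js"]
ranked = sorted(candidates, key=lambda c: levenshtein(current, c))
print(ranked[0])  # the candidate nearest the current path comes first
```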

Enhancement 2: It's inefficient to search for the same tag again and again, so counsel-etags-recent-tag can be used to jump back to previously visited definitions.

Enhancement 3: Ivy provides the filter UI for counsel-etags. This means all the functionality of Ivy is also available. For example, users can input "!keyword1" to exclude candidates matching "keyword1".

Enhancement 4: counsel-etags-grep uses the fastest grep program, ripgrep, if it's installed; otherwise it falls back to standard grep.

Please check for more tips.

-1:-- counsel-etags v1.3.1 is released (Post Chen Bin)--L0--C0--November 12, 2017 09:40 AM