Got Emacs?: Update your package search url as is now

It's been a couple of days, but the MELPA URL has changed. You may need to update the ("melpa" . …) entry in your .emacs to point at the new address.
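The URLs themselves were lost from the post above; as a sketch, assuming this refers to MELPA's move from the old melpa.milkbox.net address to melpa.org (my assumption, since the post's own URLs did not survive syndication), the .emacs change would look like:

```elisp
(require 'package)
;; Replace the old entry, assumed to be
;;   ("melpa" . "http://melpa.milkbox.net/packages/")
;; with the new address (also an assumption):
(add-to-list 'package-archives
             '("melpa" . "http://melpa.org/packages/") t)
```

After changing it, run M-x package-refresh-contents so the package list is actually fetched from the new location.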
-1:-- Update your package search url as is now (Post sivaram)--L0--C0--October 26, 2014 02:09 AM

Endless Parentheses: Aggressive-indent just got better!

aggressive-indent is quite something. I've only just released it and it seems to have been very well received. As such, it's only fair that I invest a bit more time into it.

The original version was really just a hack that was born as an answer on Emacs.SE. It worked phenomenally well on emacs-lisp-mode (to my delight), but it lagged a bit on c-like modes.

The new version, which is already on Melpa, is much smarter and more optimised. It should work quite well on any mode where automatic indentation makes sense (python users, voice your suggestions).

As a bonus, here's a stupendous screencast, courtesy of Tu Do!



Instructions are still the same! So long as you have Melpa configured, you can install it with:

M-x package-install RET aggressive-indent

Then simply turn it on and you’ll never have unindented code again.
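If you'd rather not enable it per mode, the package also ships a global minor mode (the name below is from the package's readme as I recall it, so treat it as a best guess):

```elisp
;; Turn aggressive-indent on in every buffer where it is safe to do so.
(global-aggressive-indent-mode 1)
```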


You can also turn it on locally with:

(add-hook 'emacs-lisp-mode-hook #'aggressive-indent-mode)

Comment on this.

-1:-- Aggressive-indent just got better! (Post)--L0--C0--October 25, 2014 12:00 AM

Irreal: let with Lexical and Dynamic Scope

Artur Malabarba points to this excellent Stack Exchange entry on the speed of let with lexical versus dynamic scope. Malabarba asks why let is faster with lexical scope than it is with dynamic scope. lunaryorn provides an excellent and detailed answer that shows the generated byte code for both cases.

The TL;DR is that with dynamic scope the let variables have to be looked up in the global scope, set, and then reset after use, while with lexical scope the variables are simply local, avoiding all that lookup and setting/resetting. That may sound a little opaque, but lunaryorn's answer explains things in a very understandable way.

Generally, I don't worry too much about speed in the Elisp I write because it's mostly just simple functions that run quickly no matter how ham-handed my coding is. If you write functions that have “long” running times, it's worthwhile to take the lessons in lunaryorn's answer into account. It is, in any event, interesting and worth knowing for the day you need it.
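As a minimal illustration (my sketch, not code from either post), the only visible difference between the two cases is the file-local variable; the byte compiler does the rest:

```elisp
;; -*- lexical-binding: t -*-
;; Under lexical binding, `x' becomes a plain stack slot in the byte
;; code.  Remove the cookie above (dynamic binding) and the same `let'
;; must instead save the global value of the symbol `x', bind it, and
;; restore it afterwards, which is the extra work lunaryorn's answer
;; walks through.
(defun irreal/scope-demo ()
  (let ((x 41))
    (1+ x)))
```

You can see the difference yourself with M-x disassemble on the function, byte-compiled once with the cookie and once without.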

-1:-- let with Lexical and Dynamic Scope (Post jcs)--L0--C0--October 24, 2014 02:42 PM

Irreal: Lots of New

Here at the International Irreal Headquarters there are lots of new things to explore and play with. First, there is OS X Yosemite, the new Apple OS. I've been playing with it for a few days and like it so far. It has taken a little grief for its overly flat UI, but that doesn't bother me. I haven't really explored the new features yet, so perhaps I'll comment on those later.

Second, Emacs 24.4! At long last. Again, I've mostly just got things compiled and set up so any comments on its features will have to wait. In the meantime, you've got Artur Malabarba, Mickey Petersen, and the many posts of Bozhidar Batsov to help you explore Emacs 24.4 if you haven't already read them.

Along with the new Emacs, Bastien Guerry has pushed out a point release, 8.2.10, of Org mode. Org mode just keeps getting better and better and Bastien has done a great job in driving its development.

Finally, the best new thing of all:


My old MacBook Pro is still in great shape and is a real workhorse, but it's over 7 years old now. The main problems are that the 120G disk is pretty much full, making it hard to deal with updates and difficult to store much more data, and that with only 2G of RAM it tends to get very slow when I have more than, say, Emacs and Safari open. My new machine (Manfred II for now) is a 13 inch MacBook Pro with 16G of memory and a 512 GB SSD. I can't tell you how fast this thing feels. It's also a bit lighter than my 15 inch MacBook Pro. I considered getting the 15 inch model—mainly to get four cores—but it was heavier and, really, I don't do enough CPU-bound computing to make that an issue.

So all in all, Christmas has come early to Irreal.

-1:-- Lots of New (Post jcs)--L0--C0--October 23, 2014 11:57 AM

Emacs Redux: Emacs 24.4

Emacs 24.4 is finally out!

You can read about all the new features here. I’ve published a series of articles about some of the more interesting features.

In related news - the Emacs Mac Port based on 24.4 has also been released.

-1:-- Emacs 24.4 (Post)--L0--C0--October 21, 2014 02:36 PM

Got Emacs?: Emacs 24.4 released

Well, here's the official release mail on Emacs 24.4.  The Windows versions aren't out yet but will turn up in a few days.
-1:-- Emacs 24.4 released (Post sivaram)--L0--C0--October 21, 2014 03:18 AM

Mickey Petersen: Four year anniversary and new website

Welcome to the new Mastering Emacs website. After four years (yes, four!) it’s time for a site refresh. The old site was never meant to last that long; it was a temporary theme hastily picked so I could start writing about Emacs. Back then there were fewer blogs and Emacs resources. We didn’t even have a package manager.

The old site did serve its purpose as a launchpad for my blogging adventures. But Apache/WordPress is slow out of the box, even with SuperCache. A slight breeze and the thing would fall over — and it did, every time it featured on HackerNews or high-traffic subreddits.

Eventually I moved to FastCGI and nginx to host WordPress, but as it’s not officially supported it was a major pain to get working. António P. P. Almeida’s wordpress-nginx made my life so much easier and the site so much faster.

Alas, it’s time to retire the old site. Over the years I came to a number of conclusions:

People don’t use tags: I spent a good amount of time adding tags to every article I wrote, but almost no one ever really used them. Sure, people did click on them, but overall the reading guide proved far more useful. My goal is to re-implement a “tag”-like system built around concepts (Shells, Dired, etc.) instead of tags.

Not enough categories: I had categories like “For Beginners”, “Tutorials”, and so on. They worked OK, but I am now of the opinion that manually curating my content makes more sense. Automatic content generation’s fine, but throwing articles into two or three baskets is never good enough.

Spammers are getting smarter: I had to ditch Akismet, a free anti-spam checker for WordPress, after several years of near-perfect operation. The spammers simply mimicked humans too well, and the filter would trip up on real content. I eventually switched to manual approval, but that’s a lot of work.

Encourage visitors to read other articles: A lot of visitors would leave after reading a single article, even though I often had several related articles. I tried some of the “Suggested Content” plugins, but they were universally terrible — another mark against content automation.

Apache is a memory hog: Yes, yes. I am sure you can tame Apache and make it into a lithe and agile webserver, but my best efforts failed me. The second I switched to nginx, the memory and CPU usage dropped like a rock. Not to mention that nginx is much easier to configure.

So what about the new site then? Well it’s custom written for the job, though I may one day open source the blog engine. I launched it Tuesday the 14th of October, and immediately my site got slammed by reddit, Twitter and Hackernews on the announcement of Emacs 24.4. Talk about baptism by fire! The site held up just fine though.

The stack is Python and Flask running PostgreSQL with nginx as a reverse proxy and uWSGI as the application server, and with memcached for page caching. It took about three weeks of casual coding to write it, including the harrowing experience of having to convert the old blog articles — but more on that in a bit.

I opted for Memcached over Redis as my needs were simple, and because nginx ships with memcached support, meaning nginx could short-circuit the trip to my upstream application server should the need ever arise. For now requests just go to uWSGI, which checks the cache and returns the cached copy. That’s actually more than quick enough to survive HackerNews, the highest-traffic source of visitors I’ve had.

The slowness comes from page generation and not querying the database (databases are fast, Python is not) so that’s where memcached comes in. I thought about using nginx’s own proxy cache mechanism but invalidating the cache when you add a new comment or when I edit a page is messy.

Converting the blog articles proved a greater challenge than you might think. First of all, I like reStructuredText so I wanted to write and edit my articles in rST and convert them automatically to HTML when I publish them.

Enter Pandoc, which is a fine tool for the job. But there’s a snag. The original WordPress format is pseudo-HTML, meaning blank lines signify new paragraphs. Converting that without spending too much time with a hand-rolled, one-off state machine to convert to “real HTML” (for Pandoc to convert to rST) involved some compromises and hand editing. (And no, wrapping text blocks in paragraph tags is not enough when you have <pre> tags with newlines and other tag flotsam.)
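To make the compromise concrete, here is a deliberately naive sketch (mine, not the converter actually used) of the kind of pre-processing involved; note how it does the wrong thing for a <pre> block containing blank lines, which is exactly where the hand editing came in:

```elisp
(require 'subr-x)  ; for string-trim
;; Naive WordPress pseudo-HTML to HTML: treat blank-line-separated
;; blocks as paragraphs, leaving blocks that already start with a tag
;; (such as <pre>) untouched.  A <pre> block that itself contains
;; blank lines gets split and mangled, hence the hand editing.
(defun naive-wp-to-html (text)
  (mapconcat (lambda (block)
               (if (string-prefix-p "<" (string-trim block))
                   block
                 (concat "<p>" block "</p>")))
             (split-string text "\n\n+" t)
             "\n"))
```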

So that was painful.

Coming up with a new design proved a fun challenge as well. CSS has come a long way in four years, and things like text-justified automatic hyphenation work great on both Firefox and IE (unless you’re on Chrome, in which case it’s the dark ages for you). Drop caps, ligatures, kerning and old-style numerals also work well and are possible in CSS alone. I’m surprised how good HTML/CSS is at typesetting nowadays. The font is Cardo, an open source font inspired by Monotype’s Bembo, itself inspired by Aldus Manutius’s types from the 1500s. I originally wanted to use Bembo, but it’s way, way, WAY too expensive for web font use. If you’re a Chrome user on Windows the font will look weird, as Chrome does not see fit to grace your eyes with anti-aliasing. Again, both Firefox and IE render properly.

I opted for larger font sizes than normal in the belief that it’s not the 1990s any more, and that big font sizes mean people won’t have to zoom in or squint. Or at least that’s what I always end up doing, and my vision’s perfectly fine. Apparently that was a mistake: the amount of vitriol I received from certain quarters of the internet for having large font sizes was… perplexing, to say the least.

So I made the fonts smaller.

The site’s still undergoing changes and I plan on adding to it over time. I am particularly keen on getting people to explore my site and learn more about Emacs.

Here’s to another four years.


-1:-- Four year anniversary and new website (Post)--L0--C0--October 20, 2014 10:41 AM

Endless Parentheses: Kill Entire Line with Prefix Argument

I won’t repeat myself on the usefulness of prefix arguments, though it would be hard to overstate it. Killing 7 lines of text in two keystrokes is bliss most people will never know.

Today's post was prompted by this question on Emacs Stack Exchange. Itsjeyd has grown tired of using 3 keystrokes to kill a line (C-a C-k C-k) and asks for an alternative. The straightforward answer is to use kill-whole-line instead, but then you either need another keybind, C-S-backspace, or you need to lose the regular kill-line functionality.

The solution I've found for myself is to use prefix arguments.
You see, killing half a line is a useful feature, but slaughtering three and a half lines makes very little sense. So it stands to reason to have kill-line meticulously murder everything in its sight when given a prefix argument.

(defmacro bol-with-prefix (function)
  "Define a new function which calls FUNCTION.
Except it moves to beginning of line before calling FUNCTION when
called with a prefix argument. The FUNCTION still receives the
prefix argument."
  (let ((name (intern (format "endless/%s-BOL" function))))
    `(progn
       (defun ,name (p)
         ,(format
           "Call `%s', but move to beginning of line when called with a prefix argument."
           function)
         (interactive "P")
         (when p
           (forward-line 0))
         (call-interactively ',function))
       ',name)))

And we bind them, of course.

(global-set-key [remap paredit-kill] (bol-with-prefix paredit-kill))
(global-set-key [remap org-kill-line] (bol-with-prefix org-kill-line))
(global-set-key [remap kill-line] (bol-with-prefix kill-line))

With this little macro, C-k still kills from point, but C-3 C-k swallows three whole lines. As a bonus, we get the kill-whole-line behavior by doing C-1 C-k.

Are there any other functions which might benefit from this macro?
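One answer, offered as my own suggestion rather than something from the post: if you use visual-line-mode, kill-visual-line has exactly the same half-line/whole-line split, so it can get the same treatment:

```elisp
;; Assumes the `bol-with-prefix' macro from above is already defined.
(global-set-key [remap kill-visual-line]
                (bol-with-prefix kill-visual-line))
```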

Comment on this.

-1:-- Kill Entire Line with Prefix Argument (Post)--L0--C0--October 19, 2014 12:00 AM

Flickr tag 'emacs': スクリーンショット 2014-10-18 17.01.34

zatsu posted a photo:

スクリーンショット 2014-10-18 17.01.34

Emacs 25.0 on OS X 10.10 Yosemite

-1:-- スクリーンショット 2014-10-18 17.01.34 (Post zatsu)--L0--C0--October 18, 2014 08:02 AM

Grant Rettke: A must-see of advanced babel usage in org with R

This post is a must-see of advanced babel usage in org with R

-1:-- A must-see of advanced babel usage in org with R (Post Grant)--L0--C0--October 13, 2014 11:36 PM

Phil Hagelberg: in which preconceptions are unavoidable

In my last post I introduced my latest project, a HyperCard clone I've been writing in the Racket programming language, which is a practical and accessible dialect of Scheme. I'd played around a bit with Racket before, but this was the first time I'd used it for anything nontrivial. Any time you come to an unfamiliar language, there's naturally a period of disorientation in which it can be frustrating to find your footing. Racket's similarity to Clojure (the language I'm currently most proficient in) means this shouldn't be as pronounced as it would be with many languages, but these are my subjective reactions to how it's gone implementing my first project. There are a number of gripes, but if I may offer a spoiler, in the end Racket provides satisfactory ways of addressing all of them that aren't obvious up-front, and brings valuable new perspectives to the table.

When I was getting started with Racket, I was pleased to see that it defaults to immutable data structures. Coming from a Clojure background, I'm used to using free-form maps for everything. Racket has hash tables which sound similar, so let's take a look at how they're used:

(define h #hash(('a . 123)
                ('b . 234)
                ('c . (+ 345 456))))

(h 'b)
; application: not a procedure;
;  expected a procedure that can be applied to arguments
;   given: '#hash((a . 123) (b . 234) (c . (+ 345 456)))
;   arguments...:
;    'b

What's going on here? Well, it looks like hash tables can't be called like functions. This has never made any sense to me, since immutable hash tables are actually more like mathematical functions than lambdas are. But whatever, we'll just use hash-ref instead; it's more verbose but should get the job done:

(hash-ref h 'b)

; hash-ref: no value found for key
;   key: 'b

It turns out Racket implicitly quotes everything inside the hash table. So OK, maybe that's a little nicer since you don't need to quote the symbol keys in the hash table:

(define h #hash((a . 123)
                (b . 234)
                (c . (+ 345 456))))

(hash-ref h 'b) ; -> 234
(hash-ref h 'c) ; -> '(+ 345 456)

Oh dear... that's less than ideal, especially compared to Clojure's simple (def h {:a 123 :b 234 :c (+ 345 456)}) and (:c h) notation. But let's move on[1] since it turns out hash tables are not nearly as important as maps are in Clojure. It's more idiomatic to use structs if your fields are known up-front:

(struct abc (a b c))
(define s (abc 123 234 (+ 345 456)))

(abc-c s) ; -> 801
s ; -> #<abc>

So that's nice in that it avoids the implicit quoting; our regular evaluation rules work at least. But what's this at the end? Racket structs default to being opaque. This may have made sense years ago when you needed to protect your mutable fields, but now that immutability is the default, it just gets in the way. Luckily you can set the #:transparent option when defining structs, and this will likely become the default in the future.

One place where Racket has a clear advantage over Clojure is that you'll never get nil back from an accessor. Both in hash tables and structs, if a field doesn't exist, you'll get an error immediately rather than allowing bogus data to percolate through your call chain and blow up in an unrelated place. (Though of course with hash tables you can specify your own value for the "not found" case.) In any case, less "garbage in, garbage out" is a welcome change for me as a human who frequently makes mistakes.

What about updates, though? Mutable struct fields have setter functions auto-generated, but inexplicably the nondestructive equivalents for immutable fields are missing. Instead the struct-copy macro is recommended. Here we change the b field of an abc struct instance we've defined:

(define s2 (struct-copy abc s [b 987]))
(abc-b s2) ; -> 987

This works, though you have to repeat the struct type in the invocation. That's not so bad, but the bigger problem is that this is a macro. The field you wish to update must be known at compile time, which makes it awkward to use in the context of higher order functions.

At this point the post is surely sounding pretty whiny. While the out-of-the-box experience working with these data structures is not great, Racket gives you plenty of power to make things better. Probably the most comprehensive take on this I've seen is Rackjure, which gives you a lot of the creature comforts I've noted as missing above like nicer hash table syntax and data structures you can call like functions, as well as a few other niceties like a general-purpose equality predicate[2] and atomic swap for boxes.

In my initial exploration of Racket, I resisted the temptation to dive straight into Rackjure in order to give "raw Racket" a fair shakedown. Because of this, I've spent more time looking into structs and some of the options they provide. Racket has the notion of interfaces you can conform to in order to get generic functionality specialized to a certain struct type. Dictionaries are one of the interfaces it ships with out of the box, so you can use dict-ref, dict-set, etc with hash-tables and other built-in types that conform to this interface. Your typical structs won't work with it, but you can declare structs that implement it without too much fuss. I've done this with my fstruct macro:

(fstruct fabc (a b c)) ; define a struct type with three fields
(define fs (fabc 123 234 (+ 345 456)))

(dict-ref fs 'a) ; -> 123
(dict-set fs 'b 999) ; -> (fabc 123 234 801)
(dict-update fs 'c (λ (x) (- x 400))) ; -> (fabc 123 234 401)

One gotcha if you're used to Clojure is that dict-update is not variadic—if you provide a fourth argument it will be used as a "not found" value rather than as an argument to the updater function. (dict-update fs 'c - 400) won't work. However, unlike Clojure, Racket can do reverse partial application, so (rcurry - 400) does the job, which is nicer than spelling out the lambda form fully.

Another gotcha is that dict-update doesn't appear to have a nested equivalent. For instance, it would be nice to be able to pass an updater function and a "key path" to a specific value in a tree of dictionaries:

(define inner (fabc '(a b c) 0 0))
(define ht-nest `#hash((key1 . ,inner)
                       (key2 . #f)))
(define outer (fabc 0 ht-nest '(1 2 3)))

(define updated (dict-update-in outer '(b key1 a) append '(d e f)))

(dict-ref-in updated '(b key1 a)) ; -> '(a b c d e f)

So that is easy to add:

(define (dict-update-in d ks f . args)
  (if (empty? (rest ks))
      (dict-update d (first ks) (λ (x) (apply f x args)))
      (dict-set d (first ks) (apply dict-update-in
                                    (dict-ref d (first ks))
                                    (rest ks) f args))))

(define (dict-ref-in d ks)
  (if (empty? (rest ks))
      (dict-ref d (first ks))
      (dict-ref-in (dict-ref d (first ks)) (rest ks))))

The fstruct macro has one more trick up its sleeve. The structs it generates are applicable just like Clojure maps:

(define fs2 (fabc 123 (fabc 234 345 (fabc 987 654 321)) 0))

(fs2 'a) ; -> 123
(fs2 '(b b)) ; -> 345
(fs2 'c 9) ; -> (fabc 123 (fabc 234 345 (fabc 987 654 321)) 9)
(fs2 '(b c a) 0) ; -> (fabc 123 (fabc 234 345 (fabc 0 654 321)) 0)
(dict-update-in fs2 '(b b) + 555)
; -> (fabc 123 (fabc 234 900 (fabc 987 654 321)) 0)

They support nested lookups and setting out of the box, but for updates where the new value is a function of the old value you'll have to use dict-update or dict-update-in. My primary project at the moment has a deeply-nested state fstruct that contains hash-tables which contain fstructs, so being able to use a single dict-update-in that operates across multiple concrete types is very convenient.

Finally, while I prefer pure functions for as much of the logic as I can, the outer layer requires tracking state and changes to it. Racket provides the box type for this, which is equivalent to the atom of Clojure. Unfortunately while it provides the same compare-and-swap atomicity guarantees, it only exposes this via the low-level box-cas! function. Oh well, functional swap! which operates in terms of the old value is easy to implement on our own or steal from Rackjure:

(define (swap! box f . args)
  (let [(old (unbox box))]
    (or (box-cas! box old (apply f old args))
        (apply swap! box f args))))

(define b (box 56))

(box-cas! b 56 92) ; -> b now contains 92

(swap! b + 75) ; -> b now contains 167

The HyperCard clone I wrote about in my last post consists of a number of modes that define handlers that can update the state based on clicks. The handlers are all functions that take and return a state fstruct and are called via the swap! function. This allows the bulk of the code to be written in a pure fashion while keeping state change constrained to only two outer-layer mouse and key handler functions. The actual box containing the state never leaves the main module.

Racket has top-notch support for contracts that can describe the shape of data. In this case rather than attaching contracts to functions scattered all over the codebase, I attach them only to the box that contains the state struct, and any time there's a type bug it's usually immediately apparent what caused the trouble. For instance, I have a contract that states that the "corners" field of each button must be a list of four natural numbers, but I've made a change which causes one of them to be negative:

now: broke its contract
   promised: natural-number/c
   produced: -23
   in: the 3rd element of
       the corners field of
       an element of
       the buttons field of
       the values of
       the cards field of
       the stack field of
       the content of
       (box/c (struct/dc state
                         (card string?)
                         (stack (struct/dc stack ...))))

It's pretty clear here that I've made a miscalculation in the button coordinates. If you use DrRacket, the IDE that ships with Racket, you get a very slick visual trace leading you directly to the point at which the contract was violated. While it would be possible to gain more certainty about correctness at compile time by using Typed Racket, contracts let me define the shape of the data in a single place rather than annotating every function that handles state.

While I'd certainly be happy if Racket accommodated some of these functional programming idioms in a more streamlined way out of the box, it speaks volumes that I was able to make myself quite comfortable on my own only a week or two after beginning with the language[3]. It's interesting to note that all of the places in which Clojure has the edge[4] over Racket (with the conspicuous exception of equality) lie around succinctness and greater expressivity, while Racket's advantages are all around improved correctness and making mistakes easier to prevent and detect.

[1] It's possible to perform evaluation inside hash literal syntax by using backticks: `#hash((a . ,(+ 5 6 7))) does what you expect. It's better than nothing, but that's a lot of noise to express a simple concept. In practice, you don't really use the literal notation in programs; you just call the hash function.

[2] I've blogged about why egal is such a great equality predicate. Racket ships with a bewildering array of equality functions, but in functional code you really only need this one (sadly absent from the core language) for 98% of what you do.

[3] With one exception: Racket's macro system is rather intimidating coming from Clojure. Its additional complexity allows it to support some neat things, but so far I haven't gotten to the point where I'd understand why I'd want to do those kinds of things. In any case, I got help from Greg Hendershott to turn the fstruct macro into something properly hygienic.

[4] This ignores the advantages conferred by the respective implementations—Clojure has significantly better performance and access to libraries, while Racket's compiler/debugger/editor tools are much more polished, and its resource consumption is dramatically lower.

-1:-- in which preconceptions are unavoidable (Post Phil Hagelberg)--L0--C0--October 13, 2014 08:50 PM

肉山博客: Gnus: Encrypting Mail with GPG

This Thursday (2014-10-09), with the help of my colleague Jack, I successfully sent an encrypted email from Gnus.

1 Workflow

Gnus ships with support for GPG, so once Gnus is configured (see 2.2), encrypting a message is easy:

  • C-x m (compose-mail) to start a new message
  • C-c C-m C-e (mml-secure-message-sign-encrypt), which inserts:


    <#secure method=pgpmime mode=signencrypt>

    A note on mode=signencrypt:

    • sign: sign with the sender's (your) private key, so the recipient knows the mail really came from you
    • encrypt: encrypt with the recipient's public key, so only the recipient can decrypt the mail
  • When the message is written, C-c C-c (message-send-and-exit) sends it

2 Configuration

2.1 Generate a key pair with GPG and import other people's public keys

I won't go into the details here; just read the manual, or search for one of the many tutorials.

2.2 Configure Gnus

I don't run my own mail server; I use Google's Gmail.

My Gnus configuration basically follows Chen Bin's Practical guide to use Gnus with Gmail, which is simple and practical.

3 My public key

Version: GnuPG v1


-1:-- Gnus: Encrypting Mail with GPG (Post Wenshan)--L0--C0--October 12, 2014 12:59 PM

Flickr tag 'emacs': How do I blog with org2blog

Thiago Perrotta posted a video:

How do I blog with org2blog

How do I blog with org2blog and emacs.

-1:-- How do I blog with org2blog (Post Thiago Perrotta)--L0--C0--October 11, 2014 02:54 PM

Tom Tromey: Emacs hint for Firefox hacking

I started hacking on firefox recently. And, of course, I’ve configured emacs a bit to make hacking on it more pleasant.

The first thing I did was create a .dir-locals.el file with some customizations. Most of the tree has local variable settings in the source files — but some are missing and it is useful to set some globally. (Whether they are universally correct is another matter…)

Also, I like to use bug-reference-url-mode. What this does is automatically highlight references to bugs in the source code. That is, if you see “bug #1050501”, it will be buttonized and you can click (or C-RET) and open the bug in the browser. (The default regexp doesn’t capture quite enough references so my settings hack this too; but I filed an Emacs bug for it.)

I put my .dir-locals.el just above my git checkout, so I don’t end up deleting it by mistake. It should probably just go directly in-tree, but I haven’t tried to do that yet. Here’s that code:

 ;; Generic settings.
 (nil .
      ;; See C-h f bug-reference-prog-mode, e.g, for using this.
      ((bug-reference-url-format . "")
       (bug-reference-bug-regexp . "\\([Bb]ug ?#?\\|[Pp]atch ?#\\|RFE ?#\\|PR [a-z-+]+/\\)\\([0-9]+\\(?:#[0-9]+\\)?\\)")))

 ;; The built-in javascript mode.
 (js-mode .
     ((indent-tabs-mode . nil)
      (js-indent-level . 2)))

 (c++-mode .
	   ((indent-tabs-mode . nil)
	    (c-basic-offset . 2)))

 (idl-mode .
	   ((indent-tabs-mode . nil)
	    (c-basic-offset . 2)))


In programming modes I enable bug-reference-prog-mode. This enables highlighting only in comments and strings. This would easily be done from prog-mode-hook, but I made my choice of minor modes depend on the major mode via find-file-hook.

I’ve also found that it is nice to enable this minor mode in diff-mode and log-view-mode. This way you get bug references in diffs and when viewing git logs. The code ends up like:

(defun tromey-maybe-enable-bug-url-mode ()
  (and (boundp 'bug-reference-url-format)
       (stringp bug-reference-url-format)
       (if (or (derived-mode-p 'prog-mode)
	       (eq major-mode 'tcl-mode)	;emacs 23 bug
	       (eq major-mode 'makefile-mode)) ;emacs 23 bug
	   (bug-reference-prog-mode t)
	 (bug-reference-mode t))))

(add-hook 'find-file-hook #'tromey-maybe-enable-bug-url-mode)
(add-hook 'log-view-mode-hook #'tromey-maybe-enable-bug-url-mode)
(add-hook 'diff-mode-hook #'tromey-maybe-enable-bug-url-mode)
-1:-- Emacs hint for Firefox hacking (Post tom)--L0--C0--October 09, 2014 07:20 PM

Phil Hagelberg: in which cards are stacked

My childhood summers involved many days spent building out expansive HyperCard stacks to explore in a game world that spanned across cities and islands and galaxies, littered with creatively absurd death scenes to keep you on your toes. The instant accessibility of HyperCard invited you to explore and create without feeling intimidated. Especially for younger minds, I believe there's something fundamental about the spatial aspect of HyperCard. Every button has its place, and rather than creating an abstract hierarchy of classes or a mesh of interconnected modules, you can see that buttons have a specific place on a card, which exist "inside" a stack in a metaphor that maps to the way we already perceive the world.

you know, for kids

While Apple killed HyperCard off many years ago, there exist tools for children today that maintain this accessible spatial arrangement. I've written before about how my kids love playing with Scratch, a Logo-descendant providing colorful sprites and drag-and-drop scripts to animate them. While this is unbeatable for the early years, (especially for kids who are only just beginning to learn to read) eventually you hit an abstraction ceiling beyond which it becomes very tedious to express your ideas due to its non-textual nature. There are modern HyperCard clones like LiveCode, which offers a very sophisticated platform, but falls prey to the same tragic pitfall of attempting to build an "English-like" programming language, an endeavour which has been attempted many times but always ends in tears.

myst island

So as I've thought a good next step for my own children, we happened to start playing the game Myst. Given my children's proclivities, this was immediately followed by Myst copycat worlds drawn out in elaborate detail with pen and paper the next day. I thought of what I could do to bring back the exploration and creativity of HyperCard, and eventually I got to building my own implementation.

The natural choice for this project was definitely the Racket language. While I'm much more familiar with Clojure, it's a very poor fit for a first-time programmer, especially at a young age. Racket boasts lots of great learning material, but beyond the texts there's just an ever-present focus on usability and clarity that shines through in all aspects of the language and community.

drracket error

Racket's roots lie with the famously-minimalistic Scheme, but it's grown to be a much more practical, expansive programming language. While there are still a few places in which its imperative roots show through, it has good support for functional programming and encourages good functional practice by default for the most part. (I hope to do a future post on a few ways to improve on its FP, both via the Rackjure library and some of my own hacks.) But what really sets Racket apart is its solid tooling and libraries. I wouldn't put my kids down in front of Emacs, but Racket ships with a very respectable IDE that's capable and helpful without being overwhelming. The GUI and drawing libraries that come with Racket have proven to be very useful and approachable for what I've done so far.

So far Cooper, my HyperCard clone, is fairly simplistic. But in just over 500 lines of code, I have a system that supports manipulating stacks of cards, drawing backgrounds on them, and laying out buttons which can either navigate to other cards or invoke arbitrary functions.

It's not sophisticated, but my children have already shown plenty of interest in using it to build out small worlds of their own in the cards. They've so far been willing to put up with lots of glitches and poor performance to bring their imaginations to life, and I'm hoping that this can gradually grow into a deeper understanding of how to think in a logical, structured way.

-1:-- in which cards are stacked (Post Phil Hagelberg)--L0--C0--October 07, 2014 10:25 PM

Ben Simon: Miva Merchant for Code Geeks; Or how I fought against the default Miva dev model and sort of won

Miva Merchant? Let's Do it!

I've got a relatively new client whose current platform is Miva Merchant. While he's open to switching to a new platform, I'm hesitant to do so without a firm grasp of his business and of the pros and cons of Miva. So for now, I need to become best buds with Miva.

A few Google searches turned up results implying that Miva is, well, scriptable. And while the syntax appears to be on the clunky side (it's XML-based, so that is to be expected), it does look relatively complete.

As I dove into my first few projects, I started to come up to speed on Miva. The video tutorials are well done, the admin tool is quite complete, and the template language (audaciously named Store Morph Technology, aka SMT) looks promising.

But wait...

As I started to truly build out my projects I kept running into two questions: (1) where's the flat file interface for the templates, and (2) when do I get to start using MivaScript?

While the admin UI is nice and all (there's a basic version control system built in), it's hardly the ideal environment to work on large template files (which are a mix of HTML and special tags). Or put more bluntly, I want to use emacs to edit content, not a text area in Firefox. Surely I was missing something. Finally I broke down and filed a ticket on the topic. The tech rep said she'd call me to discuss; and sure enough she did. She explained to me that you *have* to use the admin UI, there is no file interface. Apparently, the Miva techs work by copying and pasting content into a text editor, making changes, and copying and pasting it back.

While the tech was very nice (and again, promptly called me), surely she must have just been misinformed. Miva can't expect professional programmers to build sophisticated stores by copying and pasting code. Can they? Even if I'm OK with all this copying and pasting, how am I supposed to use a real version control system or do an automated deployment, if all the source code needs to live in an HTML admin tool?

I took my quandary further up the Miva chain. (Which again, I give them credit for having as an option.) I finally spoke to a senior developer and he told me that the tech's understanding is correct. They are aware of the limitations of the admin UI, but it's the only way they currently support using Miva. I got the impression that future versions of the product may help address this issue.

OK, I'll learn to love the admin UI. My workaround for now is to keep a version of the templates found in Miva in a local source repository. The content of the PROD page, for example, is stored in: admin/Pages/PROD/Page.smt.

And it gets worse

Issue number (2), however, points to a potentially larger concern. After a bunch of research I learned that Miva Merchant itself is written in MivaScript, and while the templates themselves understand SMT tags, they don't process MivaScript. In other words, your typical Miva Merchant developer never writes or even encounters MivaScript. Fair enough. But how do they modularize code? That is, how do I avoid duplicating code in templates, and how do I keep my templates from growing out of control? Looking at the template language of choice, there didn't seem to be a way to include code. Without a basic include statement, how can I achieve any sort of modularity?

I ran these questions by the nice tech that called me. The short answer was, there is no way to do this. At the senior tech level, he mentioned something about setting up custom items that would appear in the UI as their own editable regions, which I could then include at will. This is somewhat promising, but it means embracing the admin UI even more.

All this left me pretty disappointed with Miva. There simply had to be a way to make it more developer friendly.

A first attempt at a fix

If the standard set of tools wasn't going to give me what I wanted, what about building a custom extension or two? I make custom plugins in WordPress all the time; how hard could it be in Miva? This seemed promising, as extensions are written in MivaScript, so I'd get the power of a full programming language instead of making do with an admin UI and the basic SMT language. Alas, after a few days of analyzing extensions (you can see lots of examples in the source for Miva Merchant provided here; you'll need the compiler here), I finally had to throw in the towel. Between the fact that MivaScript is a language unto itself and my lack of understanding of how Miva Merchant operates, I'm just not ready to create extensions for it. I had to surrender.

And the real fix

But still, I wasn't ready to give up on my dream of a more developer-friendly Miva experience. And then I found the missing piece of the puzzle: the External File Pro v5+ module. This bad boy lets you invoke a MivaScript function in an arbitrary file and places the result in an SMT file of your choice.

Let's take a specific example to see how this module can save the day. Suppose you want to customize your title tag depending on your product name. You could head over to the PROD page and put in some code like this:

<mvt:if expr="l.all_settings:product:name EQ 'Iguana'">
  <title>Love Lizards! Buy an Iguana Today!</title>
</mvt:if>
<mvt:if expr="l.all_settings:product:name EQ 'Turtle'">
  <title>Totally Turtles! Buy a Turtle Today!</title>
</mvt:if>

But that code is messy and hard to maintain. Ideally, it would live in its own function, in its own file. That's where External File Pro comes in.

The first order of business is to buy and install External File Pro v5+. Once you've enabled the module and associated it with the PROD page, you're ready to get to work.

I created a MivaScript source file: /mm5/snippets/seo/title.mv. The path is totally arbitrary; I plan to organize all of these includes as snippets, and this particular one is related to SEO, hence the naming. Note that the source file extension is .mv. The actual MivaScript file looks something like this:

<MvCOMMENT>
  Generate our Page Title in a smart way. If we can make a better title than the default one, then
  go for it.
</MvCOMMENT>

<MvFUNCTION NAME = "ADS_External_File" PARAMETERS = "module var, item, all_settings var, settings var, ignored" STANDARDOUTPUTLEVEL = "text, html, compresswhitespace">
  <MvIF EXPR = "{ l.all_settings:product:name EQ 'Iguana' }">
    <MvEVAL EXPR = "{ '<title>' $ 'Love Lizards! Buy a ' $ l.all_settings:product:name $ ' Today!' $ '</title>' }"/>
  <MvELSEIF EXPR = "{ l.all_settings:product:name EQ 'Turtle' }">
    <MvEVAL EXPR = "{ '<title>' $ 'Totally Turtles! Buy a ' $ l.all_settings:product:name $ ' Today!' $ '</title>' }"/>
  <MvELSE>
    <MvEVAL EXPR = "{ '<title>' $ l.all_settings:product:name $ '! Buy one Today!' $ '</title>' }"/>
  </MvIF>
</MvFUNCTION>

Notice that the function is named ADS_External_File. That's a requirement of the External File Pro plugin, and serves as the entry point to our snippet.

This file needs to be compiled to a .mvc file. I'm using a simple Makefile to accomplish this:

## Makefile for building our MivaScript function files

export PATH := /c/tools/miva/msc/BIN:$(PATH)
RM  = rm
MVC = mvc.exe  -B 'c:\tools\miva\msc\BUILTINS'
SNIPPETS = seo/title
SRCS     = $(addsuffix .mv, $(SNIPPETS))
COMPILED = $(addsuffix .mvc, $(SNIPPETS))

all : $(COMPILED)

%.mvc : %.mv
	$(MVC) $<

clean :
 $(RM) -f $(COMPILED)

With the command:

  make ;  sitecopy -u

The snippet is compiled and pushed to the server.

Finally, in the PROD Page template I put in a call to the snippet:

  <mvt:item name="ads-extfile" param="function|/mm5/snippets/seo/title.mvc"/>

And I'm done.


I've now got the ability to create arbitrary functions in MivaScript and pull their contents into a template. This gives me the modularity and maintainability I was after, and opens the door to streamlining development of complex functionality. Looks like Miva and I may indeed be best buds after all.

-1:-- Miva Merchant for Code Geeks; Or how I fought against the default Miva dev model and sort of won (Post Ben Simon)--L0--C0--October 07, 2014 08:44 AM

Dev and Such [Emacs Category]: New and Improved Emacs Config!

I finally did it. I moved my Emacs configuration into a .org file.

You can see the results here on GitHub. There's not much else to say in this post, honestly. The configuration pretty much speaks for itself.

Thanks to Sacha Chua for the inspiration.

-1:-- New and Improved Emacs Config! (Post)--L0--C0--October 06, 2014 09:20 PM

Bryan Murdock: SystemVerilog Streaming Operator: Knowing Right from Left

SystemVerilog has this cool feature that is very handy for converting one type of collection of bits into another type of collection of bits.  It's the streaming operator.  Or the streaming concatenation operator.  Or maybe it's the concatenation of streaming expressions (it's also called pack/unpack parenthetically).  Whatever you want to call it, it's nice.  If you have an array of bytes and you want to turn it into an int, or an array of ints that you want to turn into an array of bytes, or if you have a class instance that you want to turn into a stream of bits, then streaming is amazing.  What used to require a mess of nested for-loops can now be done with a concise single line of code.

As nice as it is, getting the hang of the streaming operator is tough.  The SystemVerilog 1800-2012 LRM isn't totally clear (at least to me) on the details of how it works.  The statement from the LRM that really got me was this, "The stream_operator << or >> determines the order in which blocks of data are streamed: >> causes blocks of data to be streamed in left-to-right order, while << causes blocks of data to be streamed in right-to-left order."  You might have some intuitive idea about which end of a stream of bits is on the "right" and which is on the "left," but I sure didn't.  After looking at the examples of streaming on page 240 of the LRM I thought I had it, and then none of my attempts to write streaming concatenations worked like I thought they should.  Here's why: "right" and "left" are different depending on whether your stream of bits is a packed array or an unpacked array.

As far as I can tell, "right" and "left" are in reference to the literal SystemVerilog code representation of an array of bits.  A literal packed array is generally written like this:

bit [7:0] packed_array = 8'b0011_0101;

And packed_array[0] is on the right (that 1 right before the semicolon).  A literal unpacked array is written like this:

bit unpacked_array[] = '{1'b1, 1'b0, 1'b1, 1'b0};

unpacked_array[0] is on the left (the first value after the left curly brace).  I don't know about you, but I'm generally more concerned with actual bit positions, not what is to the right and left in a textual representation of an array, but there you have it.

Once I got that down, I still had problems.  It turns out the results of streaming concatenations will be different depending on the variable you are storing them in.  It's really the same right/left definitions coming into play.  If you are streaming using the right-to-left (<<) operator, the right-most bit of the source will end up in the left-most bit of the destination.  If your destination is a packed array then, just as I explained above, "right" means bit zero and "left" means the highest index bit.  If your destination is an unpacked array, your right-most source bit will end up as bit zero of the unpacked array (which is the "left" bit according to the literal representation).
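To double-check my own understanding while writing this up, I sketched those packed-vs-unpacked destination rules in Python. This is just my mental model of what `dest = {<< {src}}` does, not anything normative from the LRM:

```python
src = 0b0011_0101          # models: bit [7:0] src; src[0] is the textual rightmost bit

# Packed destination (bit [7:0] dest): its "left" is bit 7, so the
# right-to-left stream reverses the bit order.
packed_dest = 0
for i in range(8):
    packed_dest |= ((src >> i) & 1) << (7 - i)
print(f"{packed_dest:08b}")   # 10101100

# Unpacked destination (bit dest[8]): its "left" is index 0, so
# dest[i] == src[i].  The textual representation reads reversed, but the
# bit indices line up.
unpacked_dest = [(src >> i) & 1 for i in range(8)]
print(unpacked_dest)          # [1, 0, 1, 0, 1, 1, 0, 0]
```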

Got all that?  If not, I put a code example on edaplayground that you can run and examine the output of.  The examples all stream bits one at a time.  It gets a little harder to wrap your head around what happens when you stream using a slice_size, or when your source and/or destination array is an unpacked array of bytes or ints. I'll write another post explaining some tricks for those next (UPDATE: next post is here).

-1:-- SystemVerilog Streaming Operator: Knowing Right from Left (Post Bryan Murdock)--L0--C0--October 06, 2014 07:38 PM

Bryan Murdock: More SystemVerilog Streaming Examples

In my previous post I promised I would write about more interesting cases of streaming using a slice_size and arrays of ints and bytes.  Well, I just posted another set of streaming examples to edaplayground.  I'm going to mostly let you look at that code and learn by example, but I will take some time in this post to explain what I think is the trickiest of the conversions.  When you go to edaplayground, choose the Modelsim simulator to run these.  Riviera-PRO is currently the only other SystemVerilog simulator choice on edaplayground, and it messes up on the tricky ones (more on that in a bit).

These examples demonstrate using the streaming operator to do these conversions:
  • unpacked array of bytes to int
  • queue of ints to queue of bytes
  • queue of bytes to queue of ints
  • int to queue of bytes
  • class to queue of bytes
Of all those examples, the queue of ints to queue of bytes and the queue of bytes to queue of ints are the tricky ones that I want to spend more time explaining.  They are both tricky for the same reason.  If you are like me, your first thought on how to convert a queue of ints to a queue of bytes is to just do this:

byte_queue_dest = {<< byte{int_queue_source}};

Before I explain why that might not be what you want, be sure you remember what "right" and "left" mean from my previous post.  The problem with the straightforward streaming concatenation above is it will start on the right of the int queue (int_queue_source[max_index], because it's unpacked), grab the right-most byte of that int (int_queue_source[max_index][7:0], because the int itself is packed), and put that byte on the left of byte_queue_dest (byte_queue_dest[0], because it is unpacked).  It will then grab the next byte from the right of the int_queue (int_queue_source[max_index][15:8]) and put it in the next position of byte_queue_dest (byte_queue_dest[1]), and so on.  The result is that you end up with the ints from the int queue reversed in the byte queue.  If that doesn't make sense, change the code in the example to the above streaming concatenation and just try it.
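Here's that naive stream modeled in Python, as a sketch of the semantics walked through above (the helper name is made up for illustration):

```python
def naive_rtl_byte_stream(int_queue):
    """Model byte_queue_dest = {<< byte{int_queue_source}}: bytes leave the
    source from its "right" (highest queue index, least-significant byte
    first) and fill the destination from its "left" (index 0)."""
    dest = []
    for word in reversed(int_queue):       # "right" of an unpacked queue = max index
        for shift in range(0, 32, 8):      # "right" of a packed int = LSB
            dest.append((word >> shift) & 0xFF)
    return dest

print([hex(b) for b in naive_rtl_byte_stream([0x44332211, 0x88776655])])
# → ['0x55', '0x66', '0x77', '0x88', '0x11', '0x22', '0x33', '0x44']
```

The bytes within each int come out in the order you'd want, but the ints themselves come out in reversed queue order, which matches the behavior described above.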

To preserve the byte ordering, you do this double streaming concatenation:

byte_queue_dest = {<< byte{ {<< int{int_queue_source}} }};

Let's step through this using the literal representation of the arrays so that rights and lefts will be obvious.  You start with an (unpacked, of course) queue of ints:

int_queue_source = {'h44332211, 'h88776655};

And just to be clear, that means int_queue_source[0][7:0] is 'h11, and we want that byte to end up as byte_queue_dest[0].  The inner stream takes 32 bits at a time from the right and puts them on the left of a temporary packed array of bits.  That ends up looking like this:

temp_bits = 'h8877665544332211;

Now the outer stream takes 8 bits at a time from the right of that and puts them on the left of a queue.  That gives you this in the end:

byte_queue_dest = {'h11, 'h22, 'h33, 'h44, 'h55, 'h66, 'h77, 'h88};

Which, if you wanted to preserve the logical byte ordering, is correct.  Going from bytes to ints, it turns out, is pretty much the same: reverse the queue of bytes and then stream an int at a time.
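For anyone who'd rather execute the two-step version, here's a Python sketch of the double stream (again just my model of the behavior stepped through above, with a made-up function name):

```python
def int_queue_to_byte_queue(int_queue):
    """Model byte_queue_dest = {<< byte{ {<< int{int_queue_source}} }}."""
    # Inner {<< int{...}}: the rightmost queue element lands on the "left"
    # (MSB end) of a temporary packed value, so element [0] ends up in the
    # low bits: temp == 0x8877665544332211 for the example above.
    temp = 0
    for i, word in enumerate(int_queue):
        temp |= (word & 0xFFFFFFFF) << (32 * i)
    # Outer {<< byte{...}}: bytes leave temp from its "right" (LSB) and
    # fill the destination queue starting at index 0.
    return [(temp >> (8 * i)) & 0xFF for i in range(4 * len(int_queue))]

print([hex(b) for b in int_queue_to_byte_queue([0x44332211, 0x88776655])])
# → ['0x11', '0x22', '0x33', '0x44', '0x55', '0x66', '0x77', '0x88']
```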

So what happens with Riviera-PRO?  If you try it in edaplayground you'll see that the resulting queue of bytes in the int-to-byte conversion ends up with a whole bunch of extra random bytes on the right (the highest indexes of the queue).  8 extra, to be exact.  Same for the int queue result in the byte-to-int conversion.  I think Riviera-PRO must be streaming past the end of the temporary packed array (the one I called temp_bits above) and pulling in bytes from off in the weeds.  Pretty crazy.  That's all done behind the scenes so I don't really know, but that's sure what it looks like.  Hopefully they can fix that soon.

Well, I hope I've helped clear up how to use the streaming operators for someone.  If I haven't, I have at least helped myself understand them better.  Ask any questions you have in the comments.
-1:-- More SystemVerilog Streaming Examples (Post Bryan Murdock)--L0--C0--October 06, 2014 07:37 PM

肉山博客: Hack log: how emacs-moz-controller came to be

Note: this is the log I kept while writing moz-controller. Since it was written for myself, the thinking jumps around and the prose is unpolished; I only hope it offers something of reference value to Emacs Lisp enthusiasts.

While writing "Controlling Firefox paging, refreshing, and tab closing from Emacs", I felt these functions had enough in common to be collected into a plugin. I had also never written an Emacs plugin from start to finish, so this was a good chance to practice.

1 Initial ideas

Depends on: the mozrepl plugin and Emacs moz-repl (explain this in the README)


Description   Function name   Keybinding (prefixed with C-c m)
refresh   r
close tab   k
scroll down/up   n/p
previous/next tab   l/f
zoom in/out   +/-


  • global-moz-controller-mode: turn the mode on globally, default nil; look at how others implement this
  • keymap: the keyboard layout
  • require moz-repl: the dependency
  • provide a hook
  • (provide 'moz-controller)
  • put it on GitHub
  • publish it on el-get, MELPA, etc.
  • promote it on EmacsWiki, G+, Twitter, HN, Reddit, Weibo, and Douban
  • implement the desired features first, then turn them into an Emacs plugin

2 Get It Running First

2.1 Previous/next tab

Li and I searched for quite a while; there doesn't seem to be a corresponding command in the JavaScript console. I looked at KeySnail's code, where switching to the previous tab is:

getBrowser().mTabContainer.advanceSelectedTab(-1, true);

But running this in the console returns an error. Leave a bookmark here and go work on 2.2 first… OK, zoom is done; now back to the tabs.

Let's try it in MozRepl. First hook up isend so I can send code to MozRepl (see: M-x isend-associate *MozRepl*):

getBrowser().mTabContainer.advanceSelectedTab(-1, true);



getBrowser().mTabContainer.advanceSelectedTab(1, true);

also works. Great, great, great. I love Emacs!!!


(defun moz-tab-previous ()
  "Switch to the previous tab"
  (interactive)
  (moz-send-command "getBrowser().mTabContainer.advanceSelectedTab(-1, true);"))

(defun moz-tab-next ()
  "Switch to the next tab"
  (interactive)
  (moz-send-command "getBrowser().mTabContainer.advanceSelectedTab(1, true);"))


Now I have all the features I want, so I can start writing the plugin.

But I noticed these functions are all defun, then the function name, then a docstring, then (interactive), and finally (moz-send-command "..."). This feels like a job for a macro; I've learned that hammer but never driven a nail with it. Let me "optimize" the code first (many people feel macros should be used as little as possible; I haven't studied programming language theory and hold no opinion): see 3.

2.2 Zoom

average on #emacs helped me a lot: he told me how to zoom a page through MozRepl, recommended a few links for learning more about MozRepl, and along the way pointed me at several useful sites and tools such as livestreamer and DownloadHelper.


gBrowser.selectedBrowser.markupDocumentViewer.fullZoom = <wanted value>

This one can't be run in the JavaScript console either, but MozRepl can do more than the console.

Based on it, I wrote three functions: zoom in, zoom out, and reset.

(defun moz-zoom-in ()
  "Zoom in"
  (interactive)
  (moz-send-command "gBrowser.selectedBrowser.markupDocumentViewer.fullZoom += 0.1;"))

(defun moz-zoom-out ()
  "Zoom out"
  (interactive)
  (moz-send-command "gBrowser.selectedBrowser.markupDocumentViewer.fullZoom -= 0.1;"))

(defun moz-zoom-reset ()
  "Reset zoom"
  (interactive)
  (moz-send-command "gBrowser.selectedBrowser.markupDocumentViewer.fullZoom = 1"))

They work, but I noticed that once you switch to another tab and back, the zoom is gone. Leave it for now and fix it when there's time. Now that I know MozRepl can execute commands the console can't, switching Firefox tabs from Emacs should be doable after all. Back to the tabs in 2.1.

3 Code cleanup

The idea is to simplify the code with a macro. I've seen macros in plenty of articles and books, but this is my first time using one in practice.

First read the documentation, C-h f defmacro:

defmacro is a Lisp macro in `byte-run.el'.

(defmacro NAME ARGLIST &optional DOCSTRING DECL &rest BODY)

Define NAME as a macro.
When the macro is called, as in (NAME ARGS...),
the function (lambda ARGLIST BODY...) is applied to
the list ARGS... as it appears in the expression,
and the result should be a form to be evaluated instead of the original.
DECL is a declaration, optional, of the form (declare DECLS...) where
DECLS is a list of elements of the form (PROP . VALUES).  These are
interpreted according to `macro-declarations-alist'.
The return value is undefined.

Then grep for defmacro in the code el-get has downloaded to see how other people use it. I found an example in paredit that looks close to what I want to do; both wrap a shell around defun:

  (defmacro defun-saving-mark (name bvl doc &rest body)
    `(defun ,name ,bvl
       ,(xcond ((paredit-xemacs-p)
                '(interactive "_"))

Drawing a cat by copying the tiger, plus a read through the elisp info manual, and after some fiddling I got it working:

(defmacro defun-moz-command (name arglist doc &rest body)
  "Macro for defining moz commands.  Pass in the desired
JavaScript expression as BODY."
  `(defun ,name ,arglist
     ,doc
     (interactive)
     (moz-send-command (car (quote ,body)))))

Then rewrite the earlier commands; moz-send-command no longer needs to appear in them:

(defun-moz-command moz-reload-browser ()
  "Refresh current page"
  "setTimeout(function(){content.document.location.reload(true);}, '500');")

(defun-moz-command moz-page-down ()
  "Scroll down the current window by one page.")

(defun-moz-command moz-page-up ()
  "Scroll up the current window by one page.")

(defun-moz-command moz-tab-close ()
  "Close current tab")

(defun-moz-command moz-zoom-in ()
  "Zoom in"
  "gBrowser.selectedBrowser.markupDocumentViewer.fullZoom += 0.1;")

(defun-moz-command moz-zoom-out ()
  "Zoom out"
  "gBrowser.selectedBrowser.markupDocumentViewer.fullZoom -= 0.1;")

(defun-moz-command moz-zoom-reset ()
  "Reset zoom"
  "gBrowser.selectedBrowser.markupDocumentViewer.fullZoom = 1")

(defun-moz-command moz-tab-previous ()
  "Switch to the previous tab"
  "getBrowser().mTabContainer.advanceSelectedTab(-1, true);")

(defun-moz-command moz-tab-next ()
  "Switch to the next tab"
  "getBrowser().mTabContainer.advanceSelectedTab(1, true);")

Readability… not great, it seems, but that'll do; at least I've used a macro once.


(global-set-key (kbd "C-c m n") 'moz-page-down)
(global-set-key (kbd "C-c m p") 'moz-page-up)
(global-set-key (kbd "C-c m k") 'moz-tab-close)
(global-set-key (kbd "C-c m +") 'moz-zoom-in)
(global-set-key (kbd "C-c m -") 'moz-zoom-out)
(global-set-key (kbd "C-c m 0") 'moz-zoom-reset)
(global-set-key (kbd "C-c m l") 'moz-tab-previous)
(global-set-key (kbd "C-c m f") 'moz-tab-next)

Now I can move on to writing this up as an Emacs plugin.

4 Writing it as an Emacs plugin

I watched a few videos and found that things similar to what I implemented above already exist, for example

But I couldn't find a corresponding Emacs extension, so I'll write one myself anyway.

First, see how other extensions are written, for example emacs-ctable and emacs-edbi (extensions I'm fairly familiar with).

Following their example, create a few files and initialize the repository with magit:

  • name it emacs-moz-controller
  • moz-controller.el
  • README in Chinese
  • README in English
  • LICENSE (GPL v3)

Then start writing emacs-moz-controller. First, it needs to depend on moz-repl; note that in the comments, then (require 'moz).

After that, just follow edbi's example.

Add some comments to the macro.


Rename the macro and the functions to start with moz-controller.

Set up a group to make customize possible (though I've never used it). There doesn't seem to be much worth customizing, though; maybe let users set the zoom step. More on that in a bit; leave a bookmark.

Define an autoloaded minor-mode (learned from org2blog, another extension I know well; I'm currently one of its maintainers, though my contributions are small):

(define-minor-mode moz-controller-mode
  "Toggle moz-controller mode.
With no argument, the mode is toggled on/off.
Non-nil argument turns mode on.
Nil argument turns mode off.


Entry to this mode calls the value of `moz-controller-mode-hook'."

  :init-value nil
  :lighter " MozCtrl"
  :group 'moz-controller
  :keymap moz-controller-mode-map

  (if moz-controller-mode
      (run-mode-hooks 'moz-controller-mode-hook)))

I read the docs but still don't quite get what autoload is for.

Write the mode-map first, again modeled on org2blog.

(defvar moz-controller-mode-map nil
  "Keymap for controlling Firefox from Emacs.")

(unless moz-controller-mode-map
  (setq moz-controller-mode-map
    (let ((moz-controller-map (make-sparse-keymap)))
      (define-key moz-controller-map (kbd "C-c m R") 'moz-controller-page-refresh)
      (define-key moz-controller-map (kbd "C-c m n") 'moz-controller-page-down)
      (define-key moz-controller-map (kbd "C-c m p") 'moz-controller-page-up)
      (define-key moz-controller-map (kbd "C-c m k") 'moz-controller-tab-close)
      (define-key moz-controller-map (kbd "C-c m b") 'moz-controller-tab-previous)
      (define-key moz-controller-map (kbd "C-c m f") 'moz-controller-tab-next)
      (define-key moz-controller-map (kbd "C-c m +") 'moz-controller-zoom-in)
      (define-key moz-controller-map (kbd "C-c m -") 'moz-controller-zoom-out)
      (define-key moz-controller-map (kbd "C-c m 0") 'moz-controller-zoom-reset)
      moz-controller-map)))

Then set up a hook.

(defvar moz-controller-mode-hook nil
  "Hook to run upon entry into moz-controller-mode.")

This hook was mentioned above in define-minor-mode, in the run-mode-hooks sexp.

Time to experiment. Comment out the related functions defined in my init.el, then:

(add-to-list 'load-path "~/hack/el/emacs-moz-controller")
(require 'moz-controller)

It suddenly occurs to me that I also want a global mode; leave a bookmark. Keep experimenting first and get the basics working before anything else. Restart Emacs (is there a better way? I don't know of one).

Then M-x moz-controller-mode (while typing I noticed the other moz-controller functions are callable too, probably because I didn't use autoload).

MozCtrl then appears in the mode line; that's the mode lighter I specified in define-minor-mode.

Then try C-c m R/n/p/k/b/f/+/- one by one. Every feature works. Happy!!!

5 Global mode (bookmark: 4)

Now to build a global mode. My feeling is that it means adding moz-controller-mode to some global hook? Let's see how others implement it: C-h f projectile-global-mode (projectile is another extension I use all the time), then jump to its definition:

(define-globalized-minor-mode projectile-global-mode

Good, define-globalized-minor-mode already exists, so all I need is:

(define-globalized-minor-mode moz-controller-global-mode


(defun moz-controller-on ()
  "Enable moz-controller minor mode."
  (moz-controller-mode t))

(defun moz-controller-off ()
  "Disable moz-controller minor mode."
  (moz-controller-mode nil))

(defun moz-controller-global-on ()
  "Enable moz-controller global minor mode."
  (moz-controller-global-mode t))

(defun moz-controller-global-off ()
  "Disable moz-controller global minor mode."
  (moz-controller-global-mode nil))

Then restart again (annoying).

M-x moz-controller-global-mode to toggle. Works!!!

git commit.

6 User-customizable zoom step (bookmark: 4)

Letting users customize the zoom step feels simple enough. First, see how org2blog handles customize and the like:

(defcustom org2blog/wp-keep-new-lines nil
  "Non-nil means do not strip newlines."
  :group 'org2blog/wp
  :type 'boolean)

That one is a boolean; mine should be a floating-point number or some such. But where do I find out what's allowed in :type? Emacs has booleanp to test whether a value is a boolean and numberp to test for a number, so I figure number should work as the :type value.

(defcustom moz-controller-zoom-step 0.1
  "Zoom step, default 0.1, it is supposed to be a positive number."
  :group 'moz-controller
  :type 'number)

Then tweak the zoom-in function:

(defun-moz-controller-command moz-controller-zoom-in ()
  "Zoom in"
  (concat "gBrowser.selectedBrowser.markupDocumentViewer.fullZoom += " (number-to-string moz-controller-zoom-step) ";"))

Eval it, but now it no longer works:

comint-send-string: Wrong type argument: stringp, (concat "gBrowser.selectedBrowser.markupDocumentViewer.fullZoom += " (number-to-string moz-controller-zoom-step) ";")

It must be the macro. Looking at the macro's definition makes the problem clear: (car (quote ,body)). When body is ((concat xxx bbb ccc)), the concat expression itself is never executed; car just pulls it out as data, hence the "Wrong type argument: stringp" error.

I can stick an eval in front. It doesn't feel like a great fix, but it will do for now.

(defmacro defun-moz-controller-command (name arglist doc &rest body)
  "Macro for defining moz commands.

NAME: function name.
ARGLIST: should be an empty list () .
DOC: docstring for the function.
BODY: the desired JavaScript expression, as a string."
  `(defun ,name ,arglist
     ,doc
     (interactive)
     (moz-send-command (eval (car (quote ,body))))))


Then, looking again at the definition of the defun-saving-mark macro, I suddenly noticed ,@body and remembered that `,@' splices a list open. Try it:

(defmacro defun-moz-controller-command (name arglist doc &rest body)
  "Macro for defining moz commands.

NAME: function name.
ARGLIST: should be an empty list () .
DOC: docstring for the function.
BODY: the desired JavaScript expression, as a string."
  `(defun ,name ,arglist
     ,doc
     (interactive)
     (moz-send-command ,@body)))

It works!!! Done. Commit again.

The basic features are all there. Next up: polish the README and the comments.

7 README and comments




8 Put it on GitHub

Create a new repo on GitHub and push; familiar ground by now.

9 Get into the package managers


I looked at el-get and at org2blog's pkg.el. I have some idea now, but I don't know where to start, so let me ask on G+; maybe someone will even send me a pull request :D

I recorded a video with screenkey + gtk-recordmydesktop, posted it to YouTube, and updated the README, then moved on to the G+ part of stage 10.

I forked melpa, added a recipe following the instructions, and opened a pull request. Steve Purcell made a small tweak for me, adding the dependency on moz-repl, and then my pull request was merged. Now it can be installed through package :D. I updated the README accordingly. With that, basically only the promotion work remains; back to 10.

10 Promotion

First post it on G+, wait a day or two, and see what the response is.

It picked up a few +1s, and someone posted it to reddit for me, which earned a few more stars (on GitHub).

I asked Chen Bin on Weibo; he said I can submit it to package systems like MELPA myself. Make a note of that, then back to 9.

I posted it on Douban and Weibo as well, but going by past experience the response won't amount to much.

Then lightly rework the README and post it on the blog as an introduction to moz-controller.

Tomorrow evening I'll submit it to Hacker News.

11 New features

While using it, I found I sometimes want to read MozRepl's output, for example the current tab's URL. A web search turned up a Stack Overflow discussion; I tried it, and gBrowser.contentWindow.location.href works. But the macro I've defined so far only sends input; it never reads the output.

I C-h f'd several comint-related functions but still couldn't get the hang of it.

I went back to #emacs to ask; no luck.

Later I opened a question on Stack Overflow: Emacs: what is the conventional way of receiving output from a process?

As advised there, I read the Filter Functions node of the elisp info manual:

A process “filter function” is a function that receives the standard output from the associated process. All output from that process is passed to the filter. The default filter simply outputs directly to the process buffer.


(defun ordinary-insertion-filter (proc string)
  (when (buffer-live-p (process-buffer proc))
    (setq wenshan-de-string string)
    (with-current-buffer (process-buffer proc)
      (let ((moving (= (point) (process-mark proc))))
        ;; Insert the text, advancing the process marker.
        (goto-char (process-mark proc))
        (insert string)
        (set-marker (process-mark proc) (point))
        (if moving (goto-char (process-mark proc)))))))

(set-process-filter (get-buffer-process "*MozRepl*") 'ordinary-insertion-filter)

(process-filter (get-buffer-process "*MozRepl*"))

;; This gets us the output. Next, figure out how to strip the trailing "\nrepl " and the surrounding quotes; substring should be enough

;; twb on #emacs gave me a pointer; I ended up using replace-regexp-in-string,

(insert (replace-regexp-in-string "\"\\(.+\\)\"\nrepl> " "\\1" wenshan-de-string))

;; Look up how to add things to the kill-ring, then modify the function as follows:
(defun moz-controller-repl-filter (proc string)
  (when (buffer-live-p (process-buffer proc))
    (setq moz-controller-repl-string (replace-regexp-in-string "\"\\(.+\\)\"\nrepl> " "\\1" string))
    (message moz-controller-repl-string)
    (kill-new moz-controller-repl-string)
    (with-current-buffer (process-buffer proc)
      (let ((moving (= (point) (process-mark proc))))
        ;; Insert the text, advancing the process marker.
        (goto-char (process-mark proc))
        (insert string)
        (set-marker (process-mark proc) (point))
        (if moving (goto-char (process-mark proc)))))))

(set-process-filter (get-buffer-process "*MozRepl*") 'moz-controller-repl-filter)

;;; Then write a moz-controller command that grabs the current URL
(defun-moz-controller-command moz-controller-get-current-url ()
  "Get the current tab's URL and add to kill-ring"
  "gBrowser.contentWindow.location.href;")

Finally, integrate this code into moz-controller; see the commit:

-1:-- Hack log: how emacs-moz-controller came to be (Post Wenshan)--L0--C0--October 03, 2014 10:54 PM

Ben Simon: More Scheme Dev on Android, Now With Git Support

Continuing on my Scheme on Android kick, here's a solution to the most recent Programming Praxis problem. It's actually quite an interesting challenge:

It is generally accepted wisdom that people should use different passwords for each of their online accounts. Unfortunately, few people do that because of the difficulty of remembering so many passwords.

Manuel Blum — he’s the guy in the middle of the Blum Blum Shub sandwich — suggests an algorithm that hashes a web site name into a password. Further, his algorithm is cryptographically secure, so no one else can determine your password, and can be worked mentally, without a computer or even pencil and paper.

Read more about the algorithm here. I don't have the mental muscle to execute the hash in my head, but it was a terrific little coding challenge.

I used the same setup as last time: Gambit Scheme + DroidEdit + Perixx 805L Keyboard. For the most part, I found myself writing and executing Scheme with little regard for the fact that I was working on a cell phone. That, of course, is what I'm after.

Two improvements I've made in the setup:

First, I've defined a procedure named lex, which I can invoke to reload the source file I'm working on:

This, combined with a couple of keyboard shortcuts to switch between Gambit and DroidEdit, means that I can modify, reload, and execute new code with relative ease. Having the source code live in ex1.scm means that the entire solution to the problem will be in one file, which is ideal for developing solutions to these brain teasers.

Second, I've setup git in Terminal IDE. So now, when I'm finished developing a solution, I can archive it. DroidEdit has built in support for Git, but I prefer a command line interface to version control.

Setting up Terminal IDE took just a bit of wrangling. Here's what I did:

  • Made sure the appropriate public key was in Github.
  • On a Linux machine that had the dropbear ssh tools installed, I converted my private key to be in the dropbear format:
    mkdir ~/tmp/working
    cp ~/.ssh/id_dsa ~/tmp/working
    cd ~/tmp/working
    ssh-keygen -p -f id_dsa # strip the encryption from the openssh key
    dropbearconvert openssh dropbear id_dsa id_dsa.db
    # Copy id_dsa.db to the Galaxy S5
    rm -rf ~/tmp/working
  • Checked the key on my phone by running the following within Terminal IDE:
    ssh -i id_dsa.db
  • Within Terminal IDE, I created a wrapper script named gitssh to launch ssh properly:
    ssh -i $HOME/id_dsa.db $*
  • Still within Terminal IDE, I set the GIT_SSH variable to the above script:
    export GIT_SSH=gitssh
  • I was then able to launch git commands as usual:
     git clone ssh://

Happy on the go hacking!

-1:-- More Scheme Dev on Android, Now With Git Support (Post Ben Simon)--L0--C0--September 29, 2014 08:13 AM

Grant Rettke: Perl regex builder in emacs

  • Perl regex builder in emacs
    • Read the wiki; there are 2 options
    • Best is this
    • Roughly confirmed that it is the best and newest
    • Set up an el-get recipe for it tonight
    • Played around for a while; can’t get el-get to load it
    • I can’t even load it myself
      • Probably my fault
    • Posted a ticket on the github page here
-1:-- Perl regex builder in emacs (Post Grant)--L0--C0--September 28, 2014 12:07 AM

Wilfred Hughes: The Definitive Guide To Syntax Highlighting

What do you expect your editor to highlight? What are the different ways that we can highlight code without calling external tools? Whilst most editors have converged on a common set of base functionality, there’s still innovation occurring in this field.

The limitation of highlighting tools is that you can’t use all of them at the same time. We’ll explore what’s available to help you choose.

I’m taking these examples from Emacs, but many of these are available on other editors too. We’ll limit ourselves to programming language highlighting that the editor itself can do, ignoring lint tools and VCS integrations.

Lexical Highlighting

js-mode in the standard Emacs theme

A programmer typically expects syntax highlighting to look like this. Different lexical categories – function names, keywords, comments and so on – are each shown in a different colour. Virtually all editors provide this, with the notable exception of Acme. In Emacs, this is largely done with font-lock-syntax-table, though font-lock-keywords is usually used too.

Simple lexical highlighting is already useful. Syntactic mistakes, such as typos in keywords or unclosed string or comments, become obvious.

A note on screenshots: The above image is the default colour scheme in Emacs. In other images I’ve customised the styling to only show the highlighting that’s related to the feature mentioned. The code samples aren’t particularly idiomatic or elegant, I’ve simply chosen them to show off relevant parts of the syntax.

It’s interesting to see that the default Emacs colour scheme does not choose a washed-out grey for comments, preferring to make comments prominent.

Extended Lexical Highlighting

Depending on your taste for ‘angry fruit salad’ highlighting, you might choose to distinguish more lexical classes. Emacs has font-lock-maximum-decoration to adjust how many distinct things are highlighted, but it’s rarely used in practice.
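If you want to experiment with it, it’s just a `setq` in your init file. A minimal sketch (the mode/level pairs here are purely illustrative):

```elisp
;; t means "maximum decoration"; an alist gives per-mode levels.
;; Here: only level 1 decoration in c++-mode, full decoration elsewhere.
(setq font-lock-maximum-decoration
      '((c++-mode . 1) (t . t)))
```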

There are a variety of minor modes that offer additional highlighting of specific syntactic features. What’s great about these minor modes is that they compose nicely, allowing them to be reused for highlighting many different languages.


This is highlight-numbers. It’s a simple, non-intrusive extension to highlighting that makes sense in pretty much every language.


Fun fact: Vim has a Common Lisp highlighting mode that highlights more syntax classes than Emacs does! Here’s a screenshot. One great feature of Vim’s mode is highlighting quoted values. This is available with highlight-quoted. As pictured above, it highlights quotes, backticks, and quoted symbols.


highlight-escape-sequences is another minor mode with a simple goal. In this case, it highlights escape sequences inside strings. It currently only supports Ruby and JavaScript.


All these minor modes are matters of preference. If a major mode developer likes this extended highlighting, they tend to include it in their major mode anyway. In the above example, enh-ruby-mode highlights all infix operators (in addition to the standard Ruby highlighting).

Semantic Highlighting

Some modes include a full parser rather than just a lexer. This enables more sophisticated highlighting techniques.


js2-mode is the best example of this. js2-mode includes a full-blown recursive-descent ECMAScript parser, plus a number of common JS extensions. This enables js2-mode to distinguish more syntax types. For example, it can distinguish parameters and global variables (pictured above).

This is an amazing achievement and even allows the editor to do many checks that are traditionally done by lint tools. Highlighting globals is particularly useful because use of a global is not necessarily an error, but it’s useful information about the code. js2-mode can also be configured to highlight globals specific to your current project or JS platform (see js2-additional-externs).

S-expression Highlighting

Emacs also offers a number of specialist highlighting modes for s-expressions.


paren-face is a simple minor mode that assigns an additional face to brackets, enabling you to style brackets separately. It’s intended to fade out the brackets, so you can focus on the rest of your code.


rainbow-delimiters takes the opposite approach. Each level of brackets is assigned a unique face, enabling you to give each one a different colour. This works particularly well when using cond, as it’s easy to spot the different boolean expressions.

By default it allows nine levels of nesting before cycling colours (see rainbow-delimiters-max-face-count), but you will have to choose a tradeoff between more levels and contrast between the colours of the different levels. I settled for six levels that are very distinct (the defaults are rather subtle).
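A configuration along these lines does it (the level count and colour here are illustrative, not recommendations):

```elisp
;; Cycle colours after six nesting levels instead of the default nine.
(setq rainbow-delimiters-max-face-count 6)
;; Make the outermost level stand out more (pick any colour you like).
(with-eval-after-load 'rainbow-delimiters
  (set-face-foreground 'rainbow-delimiters-depth-1-face "orange"))
```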


If you like rainbow-delimiters, rainbow-blocks applies the same technique, but colours everything according to the nesting depth. It’s fantastic for seeing nesting, but it does limit how much else you can highlight.


highlight-stages specifically targets quoted and quasi-quoted s-expressions. It enables the reader to easily spot unquoted parts of a quasi-quote, and is particularly useful if you’re nesting quasi-quotes.

Standard Library Highlighting

Another school of thought is that you should highlight all functions from the language standard or standard library. Xah Lee subscribes to this philosophy, and has released JS and elisp modes that provide this.


This is difficult to do in elisp as it’s a lisp-2, and this mode can confuse the variable and function slots (so list is highlighted even when used as a variable).


The default Python mode in Emacs takes a similar approach, highlighting the 80 built-in functions and some methods on built-in types. This is hard to do in general, and python-mode will incorrectly highlight similarly-named methods on other types, or names that merely match built-in functions.

Docstring Highlighting

docstrings in elisp

Docstrings are conceptually between strings and comments: they’re for the reader (like comments), but they’re available at runtime (like strings). Emacs exposes separate faces for comments, strings and docstrings (font-lock-comment-face, font-lock-string-face and font-lock-doc-face respectively).

Elisp docstrings may also contain additional syntax for cross-references. Emacs will highlight these differently too (though their primary purpose is linking cross-references in *Help* buffers).

js2-mode with JSDoc

Some languages support elaborate syntax in their comments, both to help the reader and to aid automatic documentation tools. In this example, js2-mode offers additional highlighting of JSDoc comments.

Contextual Highlighting

Another important area of highlighting is to highlight elements based on where the cursor (‘point’ in Emacs terminology) is currently located.


The most basic contextual highlighting is showing the matching bracket to the bracket currently at the cursor. This is part of Emacs, but off by default: show-paren-mode will switch it on.
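Switching it on is a one-liner in your init file; the delay setting is optional:

```elisp
;; Highlight the bracket matching the one at point, everywhere.
(show-paren-mode 1)
;; Show the match immediately rather than after the default 0.125s.
(setq show-paren-delay 0)
```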


Highlighting the current line is a very common feature of editor highlighting, and Emacs provides hl-line-mode for this. This works well for line-oriented programming languages.


When dealing with s-expressions, you can take this a step further with hl-sexp. This shows the entire s-expression under point, avoiding confusion when editing deeply nested expressions.


highlight-parentheses takes a more subtle approach. It highlights the current bracket as ‘hot’, and highlights outer brackets in progressively ‘cooler’ colours.


The last example in this section is the superb highlight-symbol. This is invaluable for showing you where else the current symbol is being used. highlight-symbol is conservative and only highlights when point isn’t moving, but set highlight-symbol-idle-delay to 0 to override this.

highlight-symbol-mode is particularly clever in that it’s able to inspect the current syntax table. This prevents it from becoming confused with strings like x-1, which is usually a single symbol in lisps, but equivalent to x - 1 in many other languages.
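A minimal setup might look like this (the prog-mode hook is my suggestion, not something the package mandates):

```elisp
;; Highlight other occurrences of the symbol at point with no pause.
(setq highlight-symbol-idle-delay 0)
;; Enable it in all programming-language buffers.
(add-hook 'prog-mode-hook #'highlight-symbol-mode)
```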

Explicit Highlighting


There comes a point where automatic highlighting isn’t sufficient, and you want to explicitly highlight something. Emacs provides hi-lock-mode for this, and supports a special comment syntax that allows other readers to see the same highlighting.
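Interactively, hi-lock is driven by M-x highlight-regexp and friends; from lisp, an equivalent sketch (the pattern and face here are arbitrary examples):

```elisp
;; Persistently highlight FIXME/TODO markers in the current buffer.
(hi-lock-mode 1)
(highlight-regexp "TODO\\|FIXME" 'hi-yellow)
```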


It’s also possible to configure Emacs to change how it displays the text itself.


There are several modes in Emacs for substituting strings like lambda or <= with their mathematical counterparts. Emacs 24.4 will also include a prettify-symbols-mode that provides this.

This works very well when editing LaTeX documents, but can be tricky with code. In cases like lambda you’re replacing with a shorter string, which means you end up indenting differently depending on whether you have substitutions switched on.
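On Emacs 24.4, adding your own substitution is a small hook; a sketch (the mapping shown is just an illustration):

```elisp
;; Display "lambda" as λ in elisp buffers without changing the text.
(add-hook 'emacs-lisp-mode-hook
          (lambda ()
            (push '("lambda" . ?λ) prettify-symbols-alist)
            (prettify-symbols-mode 1)))
```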


glasses-mode is a fun minor mode for users who don’t like CamelCase. It displays camel case symbols with underscores, so FooBar becomes Foo_Bar, without changing the underlying text.

Hashed Highlighting


One novel approach to highlighting code is to give each symbol a different colour. You simply hash each string and assign a colour accordingly. This means that variables with similar spellings get completely different colours.

This was popularised recently by an article by Evan Brooks and color-identifiers-mode was released as a result (pictured above). KDevelop has had this feature for some time, calling it ‘semantic highlighting’. IRC clients often use this technique for nickname highlighting.

Whilst powerful, it’s tricky to get right. Too few colours, and different symbols end up the same colour. Too many colours, and it’s hard to visually distinguish some pairs of colours. In the above image, url and encodeURIComponent are quite similar. This small code snippet does not really take full advantage of hashed-based highlighting: it’s most effective when you have a larger piece of code with more distinct symbols.

Introspective Highlighting


Finally, self-hosting environments, such as Emacs or Smalltalk, can offer additional highlighting possibilities. highlight-defined enables you to highlight functions, variables or macros that are currently defined.

This works well for spotting typos in variable names, but it’s a little more sophisticated. In the above image, we can see that fibonacci has been evaluated, so the recursive calls are highlighted. We can even see whether we’ve forgotten to evaluate any library imports!

Pharo Workspace highlighting selectors

Pharo (a Smalltalk implementation) is also able to do this. The methods of classes (called ‘selectors’) may be changed at any point, but the environment can introspect to see if the current selector is appropriate for the value it is being called on.

This is quite different from the traditional Java-style IDE integration, as it’s based on runtime information in the current process, instead of static analysis.

In practice however, many of the benefits of introspective highlighting are provided by calling an external language-specific lint tool from the editor.


It’s really hard to compose syntax highlighting tools. Some of the examples here are very intrusive (particularly rainbow-blocks and color-identifiers-mode), preventing you from using them in addition to other tools. The contextual highlighting tools are the best in this regard.

There’s a lot of information that could be displayed by the editor, but relatively little can be shown at once. The primary options for highlighting are only text colour, background colour, weight, lines (underline, overline, strikethrough) and fringes (colours shown at the left edge of the editor window).

If you’re writing a highlighting tool in Emacs, try to define your own faces wherever possible. For example, highlight-stages doesn’t provide a face, so it can only highlight quasi-quotes by changing the background colour. If you’re already using the background colour to highlight something else, you cannot make highlight-stages use underlines instead. I had similar problems with modes that dynamically define faces, as you can’t customise them in the normal way.
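To make that concrete, here is a face-definition sketch (all the names are hypothetical):

```elisp
;; A dedicated face lets users restyle the highlight (underline it,
;; change the background, anything) through customize, instead of
;; being stuck with whatever the mode hard-codes.
(defface my-mode-highlight-face
  '((t :inherit highlight))
  "Face used by my-mode for its highlights."
  :group 'my-mode)
```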

When you release a highlighting tool, please include screenshots. It’s amazing how many tools that I’ve listed have no screenshots on their GitHub pages.

Personally, I like angry fruit salad. Lots of contrasting colours for different lexical classes, plus tons of contextual highlighting, is the sweet spot for me. Experiment, and see what suits you.

-1:-- The Definitive Guide To Syntax Highlighting (Post Wilfred Hughes ( 27, 2014 12:00 AM

Raimon Grau: Back to the trenches with... spreadsheets?

I'm back to reality after my craziest holidays ever. Great new people, and fun with old friends...

And I'm back with some research on spreadsheets.  My new role as team leader involves some team organization and management skills.  Due to my inability to organize myself, I'm trying to solve this once and for all, and it's also a good opportunity to take another look at org-mode.

I've been dabbling with org-agenda and appointments. The git repo has already some '.org' files :).

So today I was watching a few talks from StrangeLoop 2014, and I stumbled upon this 'Spreadsheets for developers' talk.  The talk has its points. I don't agree with everything said there, but it does confirm the overall idea I already had of spreadsheets: everyone uses them to solve their own problems. It's the emacs of non-developers.

So I remembered org-spreadsheet, and hacked a bit with it.  It's really impressive and, although a bit cumbersome in the beginning, I think it has lots of potential. And if spreadsheets are code, org-spreadsheet is more code than Excel.

There are many different shortcuts, but you only need a few to get started.

shortcut              what it does
C-}                   toggle column/row legend
C-'                   global formula editor
C-`                   cell editor (formulas)
C-c C-c (on #+TBLFM)  update table, recalculating
S-RET                 insert previous number +1

I really encourage you to read the manual and also the tutorial, where you'll find out how to write formulas with Calc or elisp.

Also a nice trick is the org-table-to-lisp function, that will parse the table the cursor is on and give you a list of lists with the data.
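For example, with point anywhere inside a table:

```elisp
;; Returns the table as a list of row lists; the symbol `hline'
;; marks each horizontal separator line, e.g. a two-column table
;; with a header rule yields something like
;; (("shortcut" "what it does") hline ("C-'" "global formula editor")).
(org-table-to-lisp)
```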
-1:-- Back to the trenches with... spreadsheets? (Post Raimon Grau ( 25, 2014 02:19 AM

Ben Simon: Improving Terminal IDE: Adding emacs and command line copy & paste support

Terminal IDE rocks. For those who have yet to install it, it provides a bash command prompt from within Android. This isn't just an academic exercise, bash often provides the most efficient way to manage files (oh, the joy of mv, cp and tar), work with networks (curl and netcat baby!) and generally hack away with your device.

As good as Terminal IDE is, here are two ways to enhance it:

Add emacs

Sure, emacs runs on Android, but it has a nasty habit of segfaulting. I found that if I cranked down my font size it would run. Great: I could either run emacs and not see it, or see it and not have it run. David Meggison provides a brilliant fix: run emacs under Terminal IDE, and he gives instructions for doing so. Though I think they can be abbreviated to:

  • Install the emacs 'app' from Google Play
  • Open up Terminal IDE
  • Copy the emacs executable from the sdcard/emacs directory into your home directory in Terminal IDE (bonus points if you copy it into a local bin directory)
  • Invoke emacs
  • Sit back and be amazed

I've noticed a number of redraw issues, but nothing like the crashes I was seeing using the emacs app itself. It's definitely usable.

Command line copy and paste

Terminal IDE is great and all, but it often feels a bit disconnected from the rest of the system. If I've got some output from curl I want to e-mail, it takes jumping through hoops to get this done. What I really wanted was a quick and easy way to get content into and out of Terminal IDE. The Android clipboard seemed the ideal path. But how to do this?

My solution was to use Tasker (well, duh, right?). I developed two new Tasker profiles:

These are both pretty dang simple. Clipboard: Get watches the magic variable %CLIP (which contains the clipboard contents); when it changes, it automatically pushes the new contents to a file living at Tasker/clipboard/get.txt. Similarly, Clipboard: Set watches for changes to Tasker/clipboard/set.txt; as soon as this file updates, its contents are stored on the clipboard.

With these Tasker profiles running, I can now set and get the clipboard contents via plain old text files. Which of course, bash loves. From the Terminal IDE side of things, I wrote the following shell script:


## Work with the system clipboard. Or, at least pretend to.
## Really, this is all powered by Tasker.


CLIP_DIR=$HOME/Tasker/clipboard  # adjust to where Tasker reads/writes
if [ "$1" = "-s" ] ; then
    shift
    if [ -z "$1" ] ; then
        cat > $CLIP_DIR/set.txt        # no text given: copy stdin
    else
        echo "$@" > $CLIP_DIR/set.txt  # copy the arguments
    fi
elif [ "$1" = "-g" ] ; then
    cat $CLIP_DIR/get.txt
else
    echo "Usage: `basename $0` {-s|-g} [text to copy]"
fi

With this in place, I can now do:

 # setting the clipboard
 curl -i '' | clip -s   

 # get the clipboard
 clip -g | wc -l

Now, sending off the output of curl is as simple as capturing the output, switching over to the Gmail app, and pasting the results.

Next up: mixing these two solutions. I'm sure it's possible to convince emacs to use these files (get.txt and set.txt) in its kill ring operations.
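One hedged sketch of that idea, using Emacs' interprogram clipboard hooks (untested; the directory path is an assumption to adjust):

```elisp
;; Route Emacs kill/yank through the Tasker-watched files.
(defvar my-clip-dir "~/Tasker/clipboard/"
  "Directory the Tasker profiles watch; adjust to your setup.")

;; Killed text is written to set.txt, which Tasker copies to the
;; Android clipboard.
(setq interprogram-cut-function
      (lambda (text)
        (with-temp-file (expand-file-name "set.txt" my-clip-dir)
          (insert text))))

;; Yank reads get.txt, which Tasker keeps in sync with the clipboard.
(setq interprogram-paste-function
      (lambda ()
        (with-temp-buffer
          (insert-file-contents (expand-file-name "get.txt" my-clip-dir))
          (buffer-string))))
```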

-1:-- Improving Terminal IDE: Adding emacs and command line copy & paste support (Post Ben Simon ( 23, 2014 05:56 PM

Tom Tromey: Emacs Modules

I’ve been working on an odd Emacs package recently — not ready for release — which has turned into more than the usual morass of prefixed names and double hyphens.

So, I took another look at Nic Ferrier’s namespace proposal.

Suddenly it didn’t seem all that hard to implement something along these lines, and after a bit of poking around I wrote emacs-module.

The basic idea is to continue to follow the Emacs approach of prefixing symbol names — but not to require you to actually write out the full names of everything.  Instead, the module system intercepts load and friends to rewrite symbol names as lisp is loaded.

The symbol renaming is done in a simple way, following existing Emacs conventions.  This gives the nice result that existing code doesn’t need to be updated to use the module system directly.  That is, the module system recognizes name prefixes as “implicit” modules, based purely on the module name.

I’d say this is still a proof-of-concept.  I haven’t tried hairier cases, like defclass, and at least declare-function does not work but should.

Here’s the example from the docs:

(define-module testmodule :export (somevar))
(defvar somevar nil)
(defvar private nil)
(provide 'testmodule)

This defines the public variable testmodule-somevar and the “private” variable testmodule--private.

-1:-- Emacs Modules (Post tom)--L0--C0--September 20, 2014 05:08 AM

Chen Bin (redguardtoo): My Emacs skill is improved after 3 years

This is my note on useful commands/keybindings to memorize three years ago.

Most are obsolete now because I've become more skillful.

Basically I use fewer but more powerful plugins. I write more Elisp code if there is no suitable plugin.

  • Three years ago, column edit,
C-x r t yourstring RET (See "How to do select column then do editing in GNU Emacs?")

Now I use Evil.

  • Three years ago, save current position to register and jump to the position,
C-r SPC to save, C-j to jump (better-registers.el required) 

Now I use Evil.

  • Three years ago, save frame configuration to register,
C-r f (better-registers.el required) 

Now I use workgroups2.

  • Three years ago, (un)comment line,
M-; (qiang-comment-dwim-line required)

Now I use evil-nerd-commenter.

  • Three years ago for visiting the next/previous error message after compiling,
M-g M-n/M-p

I'm still using it now.

  • Three years ago, find-tag/pop-tag-mark

I use Evil's hot key.

  • Three years ago, grep current work directory only or all sub-directories
M-x lgrep/rgrep

Now I use grep in bash plus percol

  • Three years ago, visit multiple tags table
M-x visit-tags-table

Now I find that hacking tags-table-list directly might be simpler.

  • Three years ago, set countdown timer
M-x org-timer-set-timer, C-c C-x ;

Now I don't push myself with a timer.

  • Three years ago, mark subtree in org-mode
M-x org-mark-subtree

It was used to select the text to post to my wordpress blog.

Now I use org2nikola to post to my blog; org-mark-subtree is hardcoded in org2nikola.
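The tags-table-list hack mentioned above is just a setq (the paths here are placeholders):

```elisp
;; Point etags at several TAGS files at once, instead of running
;; M-x visit-tags-table once per table.
(setq tags-table-list
      '("~/projects/app/TAGS" "~/projects/lib/TAGS"))
```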

-1:-- My Emacs skill is improved after 3 years (Post)--L0--C0--September 17, 2014 01:30 PM


emacspeak: Emacspeak At Twenty: Looking Back, Looking Forward

Emacspeak At Twenty: Looking Back, Looking Forward

1 Introduction

One afternoon in the third week of September 1994, I started writing myself a small Emacs extension using Lisp Advice to make Emacs speak to me so I could use a Linux laptop. As Emacspeak turns twenty, this article is both a quick look back over the twenty years of lessons learned, as well as a glimpse into what might be possible as we evolve to a world of connected, ubiquitous computing. This article draws on Learning To Program In 10 Years by Peter Norvig for some of its inspiration.

2 Using UNIX With Speech Output — 1994

As a graduate student at Cornell, I accessed my Unix workstation (SunOS) from an Intel 486 PC running IBM Screen-Reader. There was no means of directly using a UNIX box at the time; after graduating, I continued doing the same for about six months at Digital Research in Cambridge — the only difference being that my desktop workstation was now a DEC-Alpha. Throughout this time, Emacs was my environment of choice for everything from software development and Internet access to writing documents.

In fall of 1994, I wanted to start using a laptop running Linux; a colleague (Dave Wecker) was retiring his 386 laptop that already had Linux on it and I decided to inherit it. But there was only one problem — until then I had always accessed a UNIX machine from a secondary PC running a screen-reader — something that would clearly make no sense with a laptop!

Another colleague, Win Treese, had pointed out the interesting possibilities presented by package advice in Emacs 19.23 — a few weeks earlier, he had sent around a small snippet of code that magically modified Emacs' version-control primitive to first create an RCS directory if none existed before adding a file to version control. When I speculated about using the Linux laptop, Dave remarked — you live in Emacs anyway — why don't you just make it talk!

Connecting the dots, I decided to write myself a tool that augmented Emacs' default behavior to speak — within about 4 hours, version 0.01 of Emacspeak was up and running.

3 Key Enabler — Emacs And Lisp Advice

It took me a couple of weeks to fully recognize the potential of what I had built with Emacs Lisp Advice. Until then, I had used screen-readers to listen to the contents of the visual display — but Lisp Advice let me do a lot more — it enabled Emacspeak to generate highly context-specific spoken feedback, augmented by a set of auditory icons. I later formalized this design under the name speech-enabled applications. For a detailed overview of the architecture of Emacspeak, see the chapter on Emacspeak in the book Beautiful Code from O'Reilly.
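The Lisp Advice mechanism described above can be sketched in a few lines (a toy example, not Emacspeak's actual implementation; my-speak stands in for a real TTS call):

```elisp
;; Make `next-line' speak the line it lands on, by advising the
;; command rather than patching it.
(defun my-speak (text)
  (message "speaking: %s" text))  ; placeholder for a TTS engine

(defadvice next-line (after my-speak-line activate)
  "Speak the line point moves to."
  (my-speak (thing-at-point 'line)))
```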

4 Key Component — Text To Speech (TTS)

Emacspeak is a speech-subsystem for Emacs; it depends on an external Text-To-Speech (TTS) engine to produce speech. In 1994, Digital Equipment released what would turn out to be the last in the line of hardware DECTalk synthesizers, the DECTalk Express. This was essentially an Intel 386 with 1MB of flash memory that ran a version of the DECTalk TTS software — to date, it still remains my favorite Text-To-Speech engine. At the time, I also had a software version of the same engine running on my DEC-Alpha workstation; the desire to use either a software or hardware solution to produce speech output defined the Emacspeak speech-server architecture.

I went to IBM Research in 1999; this coincided with IBM releasing a version of the Eloquence TTS engine on Linux under the name ViaVoice Outloud. My colleague Jeffrey Sorenson implemented an early version of the Emacspeak speech-server for this engine using the OSS API; I later updated it to use the ALSA library while on a flight back to SFO from Boston in 2001. That is still the TTS engine that is speaking as I type this article on my laptop.

20 years on, TTS continues to be the weakest link on Linux; the best available solution in terms of quality continues to be the Linux port of Eloquence TTS available from Voxin in Europe for a small price. Looking back across 20 years, the state of TTS on Linux in particular and across all platforms in general continues to be a disappointment; most of today's newer TTS engines are geared toward mainstream use-cases where naturalness of the voice tends to supersede intelligibility at higher speech-rates. Ironically, modern TTS engines also give applications far less control over the generated output — as a case in point, I implemented Audio System For Technical Readings (AsTeR) in 1994 using the DECTalk; 20 years later, we implemented MathML support in ChromeVox using Google TTS. In 2013, it turned out to be difficult or impossible to implement the type of audio renderings that were possible with the admittedly less-natural sounding DECTalk!

5 Emacspeak And Software Development

Version 0.01 of Emacspeak was written using IBM Screen-Reader on a PC with a terminal emulator accessing a UNIX workstation. But within about two weeks, Emacspeak was already a better environment for developing Emacspeak in particular and for software development in general. Here are a few highlights from 1994 that made Emacspeak a good software development environment; present-day users of Emacspeak will see that this was just scratching the surface.

  • Audio formatting using voice-lock to provide aural syntax highlighting.
  • Succinct auditory icons to provide efficient feedback.
  • Emacs' ability to navigate code structurally — as opposed to moving around by plain-text units such as characters, lines and words. S-Expressions are a major win!

  • Emacs' ability to specialize behavior based on major and minor modes.
  • Ability to browse program code using tags, and getting fluent spoken feedback.
  • Completion everywhere.
  • Everything is searchable — this is a huge win when you cannot see the screen.
  • Interactive spell-checking using ISpell with continuous spoken feedback augmented by aural highlights.
  • Running code compilation and being able to jump to errors with spoken feedback.
  • Ability to move through diff chunks when working with source code and source control systems; refined diffs as provided by the ediff package when speech-enabled is a major productivity win.
  • Ability to easily move between email, document authoring and programming — though this may appear trivial, it continues to be one of Emacs' biggest wins.

Long-term Emacs users will recognize all of the above as being among the reasons why they do most things inside Emacs — there is little that is Emacspeak specific in the above list — except that Emacspeak was able to provide fluent, well-integrated contextual feedback for all of these tasks. And that was a game-changer given what I had had before Emacspeak. As a case in point, I did not dare program in Python before I speech-enabled Emacs' Python-Mode; the fact that white space is significant in Python made it difficult to program using a plain screen-reader that was unaware of the semantics of the underlying content being accessed.

5.1 Programming Defensively

As an aside, note that all of Emacspeak has been developed over the last 20 years with Emacspeak being the only adaptive technology on my system. This has led to some interesting design consequences, primary among them being a strong education in programming defensively. Here are some other key features of the Emacspeak code-base:

  1. The code-base is extremely bushy rather than deeply hierarchical — this means that when a module breaks, it does not affect the rest of the system.
  2. Separation of concerns with respect to the various layers: a tightly knit core speech library interfaces with any one of many speech servers running as an external process.
  3. Audio formatting is abstracted by using the formalism defined in Aural CSS.
  4. Emacspeak integrates with Emacs' user interface conventions by taking over a single prefix key, C-e, with all Emacspeak commands accessed through that single keymap. This makes it easy to embed Emacspeak functionality into a large variety of third-party modules without any loss of functionality.

6 Emacspeak And Authoring Documents

In 1994, my preferred environment for authoring all documents was LaTeX using the Auctex package. Later I started writing either LaTeX or HTML using the appropriate support modes; today I use org-mode to do most of my content authoring. Personally, I have never been a fan of What You See Is What You Get (WYSIWYG) authoring tools — in my experience they place an undue burden on the author by drawing attention away from the content to focus on the final appearance. An added benefit of creating content in Emacs in the form of light-weight markup is that the content is long-lived — I can still usefully process and re-use things I wrote 25 years ago.

Emacs, with Emacspeak providing audio formatting and context-specific feedback, remains my environment of choice for writing all forms of content ranging from simple email messages to polished documents for print publishing. And it is worth repeating that I never need to focus on what the content is going to look like — that job is best left to the computer.

As an example of producing high-fidelity visual content, see this write-up on Polyhedral Geometry that I published in 2000; all of the content, including the drawings were created by me using Emacs.

7 Emacspeak And The Early Days Of The Web

Right around the time that I was writing version 0.01 of emacspeak, a far more significant software movement was under way — the World Wide Web was moving from the realms of academia to the mainstream world with the launch of NCSA Mosaic, followed in late 1994 by the first commercial Web browser, Netscape Navigator. Emacs had always enabled integrated access to FTP archives via package ange-ftp; in late 1993, William Perry released Emacs-W3, a Web browser for Emacs written entirely in Emacs Lisp. W3 was one of the first large packages to be speech-enabled by Emacspeak — later it was the browser on which I implemented the first draft of the Aural CSS specification. Emacs-W3 enabled many early innovations in the context of providing non-visual access to Web content, including audio formatting and structured content navigation; in the summer of 1995, Dave Raggett and I outlined a few extensions to HTML Forms, including the label element as a means of associating metadata with interactive form controls in HTML, and many of these ideas were prototyped in Emacs-W3 at the time. Over the years, Emacs-W3 fell behind the times — especially as the Web moved away from cleanly structured HTML to a massive soup of unmatched tags. This made parsing and error-correcting badly-formed HTML markup expensive to do in Emacs-Lisp — and performance suffered. To add to this, mainstream users moved away because Emacs' rendering engine at the time was not rich enough to provide the type of visual renderings that users had come to expect. The advent of DHTML and JavaScript-based Web Applications finally killed off Emacs-W3 as far as most Emacs users were concerned.

But Emacs-W3 went through a revival on the emacspeak audio desktop in late 1999 with the arrival of XSLT, and Daniel Veillard's excellent implementation via the libxml2 and libxslt packages. With these in hand, Emacspeak was able to hand off the bulk of HTML error correction to the xsltproc tool. The lack of visual fidelity didn't matter much for an eyes-free environment, so Emacs-W3 continued to be a useful tool for consuming large amounts of Web content that did not require JavaScript support.

During the last 24 months, libxml2 has been built into Emacs; this means that you can now parse arbitrary HTML as found in the wild without incurring a performance hit. This functionality was leveraged first by package shr (Simple HTML Renderer) within the gnus package for rendering HTML email. Later, the author of gnus and shr created a new light-weight HTML viewer called eww that is now part of Emacs 24. With improved support for variable pitch fonts and image embedding, Emacs is once again able to provide visual renderings for a large proportion of text-heavy Web content where it becomes useful for mainstream Emacs users to view at least some Web content within Emacs; during the last year, I have added support within emacspeak to extend package eww with support for DOM filtering and quick content navigation.

8 Audio Formatting — Generalizing Aural CSS

A key idea in Audio System For Technical Readings (AsTeR) was the use of various voice properties in combination with non-speech auditory icons to create rich aural renderings. When I implemented Emacspeak, I brought over the notion of audio formatting to all buffers in Emacs by creating a voice-lock module that paralleled Emacs' font-lock module. The visual medium is far richer in terms of available fonts and colors as compared to voice parameters available on TTS engines — consequently, it did not make sense to directly map Emacs' face properties to voice parameters. To aid in projecting visual formatting onto auditory space, I created property personality analogous to Emacs' face property that could be applied to content displayed in Emacs; module voice-lock applied that property appropriately, and the Emacspeak core handled the details of mapping personality values to the underlying TTS engine.

The values used in property personality were abstract, i.e., they were independent of any given speech engine. Later in the fall of 1995, I re-expressed this set of abstract voice properties in terms of Aural CSS; the work was published as a first draft toward the end of 1995, and implemented in Emacs-W3 in early 1996. Aural CSS was an appendix in the CSS-1.0 specification; later, it graduated to being its own module within CSS-2.0.

Later in 1996, all of Emacs' voice-lock functionality was re-implemented in terms of Aural CSS; the implementation has stood the test of time in that as I added support for more TTS engines, I was able to implement engine-specific mappings of Aural-CSS values. This meant that the rest of Emacspeak could define various types of voices for use in specific contexts without having to worry about individual TTS engines. Conceptually, property personality can be thought of as holding an aural display list — various parts of the system can annotate pieces of text with relevant properties that finally get rendered in the aggregate. This model also works well with the notion of Emacs overlays where a moving overlay is used to temporarily highlight text that has other context-specific properties applied to it.

Audio formatting as implemented in Emacspeak is extremely effective when working with all types of content ranging from richly structured mark-up documents (LaTeX, org-mode) and formatted Web pages to program source code. Perceptually, switching to audio formatted output feels like switching from a black-and-white monitor to a rich color display. Today, Emacspeak's audio formatted output is the only way I can correctly write else if vs elsif in various programming languages!

9 Conversational Gestures For The Audio Desktop

By 1996, Emacspeak was the only piece of adaptive technology I used; in the fall of 1995, I had moved to Adobe Systems from DEC Research to focus on enhancing the Portable Document Format (PDF) to make PDF content repurposable. Between 1996 and 1998, I was primarily focused on electronic document formats — I took this opportunity to step back and evaluate what I had built as an auditory interface within Emacspeak. This retrospective proved extremely useful in gaining a sense of perspective and led to formalizing the high-level concept of Conversational Gestures and structured browsing/searching as a means of thinking about user interfaces.

By now, Emacspeak was a complete environment — I formalized what it provided under the moniker Complete Audio Desktop. The fully integrated user experience allowed me to move forward with respect to defining interaction models that were highly optimized to eyes-free interaction — as an example, see how Emacspeak interfaces with modes like dired (Directory Editor) for browsing and manipulating the filesystem, or proced (Process Editor) for browsing and manipulating running processes. Emacs' integration with ispell for spell checking, as well as its various completion facilities ranging from minibuffer completion to other forms of dynamic completion while typing text provided more opportunities for creating innovative forms of eyes-free interaction. With respect to what had gone before (and is still par for the course as far as traditional screen-readers are concerned), these types of highly dynamic interfaces present a challenge. For example, consider handling a completion interface using a screen-reader that is speaking the visual display. There is a significant challenge in deciding what to speak e.g., when presented with a list of completions, the currently typed text, and the default completion, which of these should you speak, and in what order? The problem gets harder when you consider that the underlying semantics of these items is generally not available from examining the visual presentation in a consistent manner. By having direct access to the underlying information being presented, Emacspeak had a leg up with respect to addressing the higher-level question — when you do have access to this information, how do you present it effectively in an eyes-free environment? 
For this and many other cases of dynamic interaction, a combination of audio formatting, auditory icons, and the ability to synthesize succinct messages from a combination of information items — rather than having to forcibly speak each item as it is rendered visually provided for highly efficient eyes-free interaction.

This was also when I stepped back to build out Emacspeak's table browsing facilities — see the online Emacspeak documentation for details on Emacspeak's table browsing functionality, which continues to remain one of the richest collections of end-user affordances for working with two-dimensional data.

9.1 Speech-Enabling Interactive Games

So in 1997, I went the next step in asking — given access to the underlying information, is it possible to build effective eyes-free interaction for highly interactive tasks? I picked Tetris as a means of exploring this space; the result was an Emacspeak extension to speech-enable module tetris.el. The details of what was learned were published as a paper at Assets 98, and expanded as a chapter on Conversational Gestures in my book on Auditory Interfaces; that book was in a sense a culmination of stepping back and gaining a sense of perspective on what I had built during this period. The work on Conversational Gestures also helped in formalizing the abstract user interface layer that formed part of the XForms work at the W3C.

Speech-enabling games for effective eyes-free interaction has proven highly educational. Interactive games are typically built to challenge the user, and if the eyes-free interface is inefficient, you just won't play the game — contrast this with a task that you must perform, where you're likely to make do with a sub-optimal interface. Over the years, Emacspeak has come to include eyes-free interfaces to several games including Tetris, Sudoku, and of late the popular 2048 game. Each of these has in turn contributed to enhancing the interaction model in Emacspeak, and those innovations typically make their way to the rest of the environment.

10 Accessing Media Streams

Streaming real-time audio on the Internet became a reality with the advent of RealAudio in 1995; soon there were a large number of media streams available on the Internet ranging from music streams to live radio stations. But there was an interesting twist — for the most part, all of these media streams expected one to look at the screen, even though the primary content was purely audio (streaming video hadn't arrived yet!). Starting in 1996, Emacspeak started including a variety of eyes-free front-ends for accessing media streams. Initially, this was achieved by building a wrapper around trplayer — a headless version of RealPlayer; later I built Emacspeak module emacspeak-m-player for interfacing with package mplayer. A key aspect of streaming media integration in emacspeak is that one can launch and control streams without ever switching away from one's primary task; thus, you can continue to type email or edit code while seamlessly launching and controlling media streams. Over the years, Emacspeak has come to integrate with Emacs packages like emms as well as providing wrappers for mplayer and alsaplayer — collectively, these let you efficiently launch all types of media streams, including streaming video, without having to explicitly switch context.

In the mid-90's, Emacspeak started including a directory of media links to some of the more popular radio stations — primarily as a means of helping users get started — Emacs' ability to rapidly complete directory and file-names turned out to be the most effective means of quickly launching everything from streaming radio stations to audio books. And even better — as the Emacs community develops better and smarter ways of navigating the filesystem using completions, e.g., package ido, these types of actions become even more efficient!

11 EBooks — Ubiquitous Access To Books

AsTeR was motivated by the increasing availability of technical material as online electronic documents. While AsTeR processed the TeX family of markup languages, more general ebooks came in a wide range of formats, ranging from plain text generated from various underlying file formats to structured EBooks, with Project Gutenberg leading the way. During the mid-90's, I had access to a wide range of electronic materials from sources such as O'Reilly Publishing and various electronic journals — The Perl Journal (TPJ) is one that I still remember fondly.

Emacspeak provided fairly light-weight but efficient access to all of the electronic books I had on my local disk — Emacs' strengths with respect to browsing textual documents meant that I needed to build little that was specific to Emacspeak. The late 90's saw the arrival of Daisy as an XML-based format for accessible electronic books. The last decade has seen the rapid convergence to epub as a distribution format of choice for electronic books. Emacspeak provides interaction modes that make organizing, searching and reading these materials on the Emacspeak Audio Desktop a pleasant experience. Emacspeak also provides an OCR-Mode — this enables one to call out to an external OCR program and read the content efficiently.

The somewhat informal process used by publishers like O'Reilly to make technical material available to users with print impairments was later formalized by BookShare — today, qualified users can obtain a large number of books and periodicals initially as Daisy-3 and increasingly as EPub. BookShare provides a RESTful API for searching and downloading books; Emacspeak module emacspeak-bookshare implements this API to create a client for browsing the BookShare library, downloading and organizing books locally, and an integrated ebook reading mode to round off the experience.

A useful complement to this suite of tools is the Calibre package for organizing one's ebook collection; Emacspeak now implements an EPub Interaction mode that leverages Calibre (actually sqlite3) to search and browse books, along with an integrated EPub mode for reading books.

12 Leveraging Computational Tools — From SQL And R To IPython Notebooks

The ability to invoke external processes and interface with them via a simple read-eval-print loop (REPL) is perhaps one of Emacs' strongest extension points. This means that a wide variety of computational tools become immediately available for embedding within the Emacs environment — a facility that has been widely exploited by the Emacs community. Over the years, Emacspeak has leveraged many of these facilities to provide a well-integrated auditory interface.

Starting from the tight code-eval-test style of iterative programming encouraged by Lisp, and extending through languages like Python and Ruby to explorative computational tools such as R for data analysis and SQL for database interaction, the Emacspeak Audio Desktop has come to encompass a collection of rich computational tools that provide an efficient eyes-free experience.

In this context, module ein — Emacs IPython Notebooks — provides another excellent example of an Emacs tool that helps interface seamlessly with others in the technical domain. IPython Notebooks provide an easy means of reaching a large audience when publishing technical material with interactive computational content; module ein brings the power and convenience of Emacs' editing facilities to developing that content. Speech-enabling package ein is a major win, since editing program source code in an eyes-free environment is far smoother in Emacs than in a browser-based editor.

13 Social Web — EMail, Instant Messaging, Blogging And Tweeting Using Open Protocols

The ability to process large amounts of email and electronic news has always been a feature of Emacs. I started using package vm for email in 1990, along with gnus for Usenet access many years before developing Emacspeak. So these were the first major packages that Emacspeak speech-enabled. Being able to access the underlying data structures used to visually render email messages and Usenet articles enabled Emacspeak to produce rich, succinct auditory output — this vastly increased my ability to consume and organize large amounts of information. Toward the turn of the century, instant messaging arrived in the mainstream — package tnt provided an Emacs implementation of a chat client that could communicate with users on the then popular AOL Instant Messenger platform. At the time, I worked at IBM Research, and inspired by package tnt, I created an Emacs client called ChatterBox using the Lotus Sametime API — this enabled me to communicate with colleagues at work from the comfort of Emacs. Packages like vm, gnus, tnt and ChatterBox provide an interesting example of how availability of a clean underlying API to a specific service or content stream can encourage the creation of efficient (and different) user interfaces. The touchstone of such successful implementations is a simple test — can the user of a specific interface tell if the person whom he is communicating with is also using the same interface? In each of the examples enumerated above, a user at one end of the communication chain cannot tell, and in fact shouldn't be able to tell what client the user at the other end is using. Contrast this with closed services that have an inherent lock-in model e.g., proprietary word processors that use undocumented serialization formats — for a fun read, see this write-up on Universe Of Fancy Colored Paper.

Today, my personal choice for instant messaging is the open Jabber platform. I connect to Jabber via Emacs package emacs-jabber and with Emacspeak providing a light-weight wrapper for generating the eyes-free interface, I can communicate seamlessly with colleagues and friends around the world.

As the Web evolved to encompass ever-increasing swathes of communication functionality that had already been available on the Internet, we saw the world move from Usenet groups to Blogs — I remember initially dismissing the blogging phenomenon as just a re-invention of Usenet in the early days. However, mainstream users flocked to Blogging, and I later realized that blogging as a publishing platform brought along interesting features that made communicating and publishing information much easier. In 2005, I joined Google; during the winter holidays that year, I implemented a light-weight client for Blogger that became the start of Emacs package g-client — this package provides Emacs wrappers for Google services that provide a RESTful API.

14 The RESTful Web — Web Wizards And URL Templates For Faster Access

Today, the Web, based on URLs and HTTP-style protocols, is widely recognized as a platform in its own right. This platform emerged over time — to me, Web APIs arrived in the late 90's when I observed the following with respect to my own behavior on many popular sites:

  1. I opened a Web page that took a while to load (remember, I was still using Emacs-W3),
  2. I then searched through the page to find a form-field that I filled out, e.g., start and end destinations on Yahoo Maps,
  3. I hit submit, and once again waited for a heavy-weight HTML page to load,
  4. And finally, I hunted through the rendered content to find what I was looking for.

This pattern repeated across a wide range of interactive Web sites ranging from AltaVista for search (this was pre-Google), Yahoo Maps for directions, and Amazon for product searches to name but a few. So I decided to automate away the pain by creating Emacspeak module emacspeak-websearch that did the following:

  1. Prompt via the minibuffer for the requisite fields,
  2. Consed up an HTTP GET URL,
  3. Retrieved this URL,
  4. And filtered out the specific portion of the HTML DOM that held the generated response.

Notice that the above implementation hard-wires the CGI parameter names used by a given Web application into the code implemented in module emacspeak-websearch. REST as a design pattern had not yet been recognized, let alone formalized, and module emacspeak-websearch was initially decried as being fragile.

However, over time, the CGI parameter names remained fixed — the only things that have required updating in the Emacspeak code-base are the content filtering rules that extract the response — for popular services, this has averaged about one to two times a year.

I later codified these filtering rules in terms of XPath, and also integrated XSLT-based pre-processing of incoming HTML content before it got handed off to Emacs-W3 — and yes, Emacs/Advice once again came in handy with respect to injecting XSLT pre-processing into Emacs-W3!

Later, in early 2000, I created companion module emacspeak-url-templates — partially inspired by Emacs' webjump module. URL templates in Emacspeak leveraged the recognized REST interaction pattern to provide a large collection of Web widgets that could be quickly invoked to provide rapid access to the right pieces of information on the Web.

The final icing on the cake was the arrival of RSS and Atom feeds and the consequent deep-linking into content-rich sites — this meant that Emacspeak could provide audio renderings of useful content without having to deal with complex visual navigation! While Google Reader existed, Emacspeak provided a light-weight greader client for managing one's feed subscriptions; with the demise of Google Reader, I implemented module emacspeak-feeds for organizing feeds on the Emacspeak desktop. A companion package emacspeak-webspace implements additional goodies including a continuously updating ticker of headlines taken from the user's collection of subscribed feeds.

15 Mashing It Up — Leveraging Evolving Web APIs

The next step in this evolution came with the arrival of richer Web APIs — especially ones that defined a clean client/server separation. In this respect, the world of Web APIs is a somewhat mixed bag in that many Web sites equate a Web API with a JS-based API that can be exclusively invoked from within a Web-Browser run-time. The issue with that type of API binding is that the only runtime that is supported is a full-blown Web browser; but the arrival of native mobile apps has actually proven a net positive in encouraging sites to create a cleaner separation. Emacspeak has leveraged these APIs to create Emacspeak front-ends to many useful services; here are a few:

  1. Minibuffer completion for Google Search using Google Suggest to provide completions.
  2. Librivox for browsing and playing free audio books.
  3. NPR for browsing and playing NPR archived programs.
  4. BBC for playing a wide variety of streaming content available from the BBC.
  5. A Google Maps front-end that provides instantaneous access to directions and Places search.
  6. Access to Twitter via package twittering-mode.

And a lot more than will fit this margin! This is an example of generalizing the concept of a mashup as seen on the Web with respect to creating hybrid applications by bringing together a collection of different Web APIs. Another way to think of such separation is to view an application as a head and a body — where the head is a specific user interface, with the body implementing the application logic. A cleanly defined separation between the head and body allows one to attach different user interfaces i.e., heads to the given body without any loss of functionality, or the need to re-implement the entire application. Modern platforms like Android enable such separation via an Intent mechanism. The Web platform as originally defined around URLs is actually well-suited to this type of separation — though the full potential of this design pattern remains to be fully realized given today's tight association of the Web to the Web Browser.

16 Conclusion

In 1996, I wrote an article entitled User Interface — A Means To An End pointing out that the size and shape of computers were determined by the keyboard and display. This is even more true in today's world of tablets, phablets and large-sized phones — with the only difference being that the keyboard has been replaced by a touch screen. The next generation in the evolution of personal devices is that they will become truly personal by being wearables — this once again forces a separation of the user interface peripherals from the underlying compute engine. Imagine a variety of wearables that collectively connect to one's cell phone, which itself connects to the cloud for all its computational and information needs. Such an environment is rich in possibilities for creating a wide variety of user experiences to a single underlying body of information; Eyes-Free interfaces as pioneered by systems like Emacspeak will come to play an increasingly vital role alongside visual interaction when this comes to pass.

–T.V. Raman, San Jose, CA, September 12, 2014

17 References

-1:-- Emacspeak At Twenty: Looking Back, Looking Forward (Post T. V. Raman ( 15, 2014 09:11 PM

Julien Danjou: Python bad practice, a concrete case

A lot of people read up on good Python practice, and there's plenty of information about that on the Internet. Many tips are included in the book I wrote this year, The Hacker's Guide to Python. Today I'd like to show a concrete case of code that I don't consider to be state of the art.

In my last article, where I talked about my new project Gnocchi, I wrote about how I tested, hacked and then ditched whisper. Here I'm going to explain part of my thought process and a few things that raised my eyebrows when hacking this code.

Before I start, please don't get the spirit of this article wrong. It's in no way a personal attack on the authors and contributors (who I don't know). Furthermore, whisper is a piece of code that is in production in thousands of installations, storing metrics for years. While I can argue that I consider the code not to be following best practice, it definitely works well enough and is valuable to a lot of people.


The first thing that I noticed when trying to hack on whisper is the lack of tests. There's only one file containing tests, named, and the coverage it provides is pretty low. One can check that using the coverage tool.

$ coverage run
Ran 11 tests in 0.014s
$ coverage report
Name           Stmts   Miss  Cover
test_whisper     134      4    97%
whisper          584    227    61%
TOTAL            718    231    67%

While one would think that 61% is "not so bad", taking a quick peek at the actual test code shows that the tests are incomplete. What I mean by incomplete is that they, for example, use the library to store values into a database, but they never check if the results can be fetched and if the fetched results are accurate. Here's a good reason one should never blindly trust the test coverage percentage as a quality metric.

When I tried to modify whisper, as the tests do not check the entire cycle of the values fed into the database, I ended up making wrong changes while still having the tests pass.

No PEP 8, no Python 3

The code doesn't respect PEP 8. A run of flake8 + hacking shows 732 errors… While this does not impact the code itself, it's more painful to hack on it than it is on most Python projects.

The hacking tool also shows that the code is not Python 3 ready as there is usage of Python 2 only syntax.

A good way to fix that would be to set up tox and add a few targets for PEP 8 checks and Python 3 tests. Even if the test suite is not complete, starting by having flake8 run without errors and the few unit tests working with Python 3 should put the project in a better light.
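Such a setup might look roughly like this (a hypothetical tox.ini sketch; the environment names, dependency lists and test invocation are my assumptions, not taken from the whisper repository):

```ini
# Hypothetical tox.ini sketch -- targets and deps are illustrative only.
envlist = py27,py34,pep8

deps = discover
commands = discover

basepython = python2.7
deps =
commands = flake8 whisper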

Not using idiomatic Python

A lot of the code could be simplified by using idiomatic Python. Let's take a simple example:

def fetch(path,fromTime,untilTime=None,now=None):
  fh = None
    fh = open(path,'rb')
    return file_fetch(fh, fromTime, untilTime, now)
    if fh:

That piece of code could be easily rewritten as:

def fetch(path,fromTime,untilTime=None,now=None):
  with open(path, 'rb') as fh:
    return file_fetch(fh, fromTime, untilTime, now)

This way, the function actually looks so simple that one can even wonder why it should exist at all – but why not.

Usage of loops could also be made more Pythonic:

for i,archive in enumerate(archiveList):
  if i == len(archiveList) - 1:

could be actually:

for archive in itertools.islice(archiveList, len(archiveList) - 1):

That reduces the code size and makes it easier to read through the code.
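To make the idiom concrete, here is what iterating over everything but the last element looks like with islice (and with plain slicing, which is equivalent but copies the list):

```python
import itertools

archiveList = ['60s', '300s', '3600s', '86400s']

# islice stops one short of the end without any index bookkeeping:
all_but_last = list(itertools.islice(archiveList, len(archiveList) - 1))
assert all_but_last == ['60s', '300s', '3600s']

# Plain slicing does the same, at the cost of copying the list:
assert archiveList[:-1] == all_but_last
```

For small lists the copy is harmless; islice pays off when archiveList is large or a lazily generated iterable is in play.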

Wrong abstraction level

Also, one thing that I noticed in whisper, is that it abstracts its features at the wrong level.

Take the create() function; what it does is pretty obvious:

def create(path,archiveList,xFilesFactor=None,aggregationMethod=None,sparse=False,useFallocate=False):
  # Set default params
  if xFilesFactor is None:
    xFilesFactor = 0.5
  if aggregationMethod is None:
    aggregationMethod = 'average'

  #Validate archive configurations...

  #Looks good, now we create the file and write the header
  if os.path.exists(path):
    raise InvalidConfiguration("File %s already exists!" % path)
  fh = None
    fh = open(path,'wb')
    if LOCK:
      fcntl.flock( fh.fileno(), fcntl.LOCK_EX )

    aggregationType = struct.pack( longFormat, aggregationMethodToType.get(aggregationMethod, 1) )
    oldest = max([secondsPerPoint * points for secondsPerPoint,points in archiveList])
    maxRetention = struct.pack( longFormat, oldest )
    xFilesFactor = struct.pack( floatFormat, float(xFilesFactor) )
    archiveCount = struct.pack(longFormat, len(archiveList))
    packedMetadata = aggregationType + maxRetention + xFilesFactor + archiveCount
    headerSize = metadataSize + (archiveInfoSize * len(archiveList))
    archiveOffsetPointer = headerSize

    for secondsPerPoint,points in archiveList:
      archiveInfo = struct.pack(archiveInfoFormat, archiveOffsetPointer, secondsPerPoint, points)
      archiveOffsetPointer += (points * pointSize)

    #If configured to use fallocate and capable of fallocate use that, else
    #attempt sparse if configure or zero pre-allocate if sparse isn't configured.
    if CAN_FALLOCATE and useFallocate:
      remaining = archiveOffsetPointer - headerSize
      fallocate(fh, headerSize, remaining)
    elif sparse:
      fh.truncate(archiveOffsetPointer - 1)
      remaining = archiveOffsetPointer - headerSize
      chunksize = 16384
      zeroes = '\x00' * chunksize
      while remaining > chunksize:
        remaining -= chunksize
    if fh:

The function is doing everything: checking if the file doesn't exist already, opening it, building the structured data, writing this, building more structure, then writing that, etc.

That means that the caller has to give a file path, even if it just wants a whisper data structure to store itself elsewhere. StringIO() could be used to fake a file handler, but it will fail if the call to fcntl.flock() is not disabled – and it is inefficient anyway.

There are a lot of other functions in the code, such as setAggregationMethod(), that mix the handling of the files – even doing things like os.fsync() – with the manipulation of structured data. This is definitely not a good design, especially for a library, as it turns out reusing the functions in a different context is near impossible.

Race conditions

There are race conditions, for example in create() (see added comment):

if os.path.exists(path):
    raise InvalidConfiguration("File %s already exists!" % path)
# Another process can create the file right here, between the check
# above and the open() below.
fh = None
fh = open(path, 'wb')

That code should be:

try:
    fh = os.fdopen(os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL), 'wb')
except OSError as e:
    if e.errno == errno.EEXIST:
        raise InvalidConfiguration("File %s already exists!" % path)

to avoid any race condition.
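Here is a self-contained sketch of that race-free pattern in Python 3 (the post's code is Python 2, and InvalidConfiguration is whisper's exception; ValueError stands in for it here):

```python
import errno
import os
import tempfile

def create_exclusive(path):
    """Atomically create path, failing if it already exists.

    os.O_CREAT | os.O_EXCL makes the existence check and the creation a
    single atomic operation at the kernel level -- there is no window in
    which another process can slip in between a stat() and the open().
    """
    try:
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL)
    except OSError as e:
        if e.errno == errno.EEXIST:
            raise ValueError("File %s already exists!" % path)
        raise
    return os.fdopen(fd, 'wb')

with tempfile.TemporaryDirectory() as d:
    p = os.path.join(d, "data.wsp")
    fh = create_exclusive(p)      # first creation succeeds
    fh.close()
    try:
        create_exclusive(p)       # second creation fails atomically
    except ValueError as e:
        print(e)
```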

Unwanted optimization

We saw earlier that the fetch() function is barely useful, so let's take a look at the file_fetch() function that it calls.

def file_fetch(fh, fromTime, untilTime, now=None):
    header = __readHeader(fh)

The first thing the function does is to read the header from the file handler. Let's take a look at that function:

def __readHeader(fh):
    info = __headerCache.get(fh.name)
    if info:
        return info

    originalOffset = fh.tell()
    fh.seek(0)
    packedMetadata = fh.read(metadataSize)

    try:
        (aggregationType, maxRetention, xff, archiveCount) = struct.unpack(metadataFormat, packedMetadata)
    except:
        raise CorruptWhisperFile("Unable to read header", fh.name)

The first thing the function does is to look into a cache. Why is there a cache?

It actually caches the header with an index based on the file path (fh.name). Except that if one decides, for example, not to use a file and to cheat using StringIO, the handle does not have any name attribute, so this code path raises an AttributeError.

One has to set a fake name manually on the StringIO instance, and it must be unique so nobody messes with the cache:

import StringIO

packedMetadata = <some source>

fh = StringIO.StringIO(packedMetadata)
fh.name = "myfakename"
header = __readHeader(fh)

The cache may actually be useful when accessing files, but it's definitely useless when not using files. And it's not clear that the complexity (even if small) the cache adds is worth it. I doubt most whisper-based tools are long-running processes, so the cache that really matters when accessing the files is the one maintained by the operating system kernel, which is going to be much more efficient anyway, and shared between processes. There's also no expiry on that cache, which could end up wasting tons of memory.
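If a header cache were really wanted, bounding it is easy. A hypothetical sketch (not whisper's code) using functools.lru_cache, keyed on path and mtime so that stale entries are simply never reused:

```python
import functools
import struct

METADATA_FORMAT = "!2LfL"   # whisper's header layout: aggregation type,
METADATA_SIZE = struct.calcsize(METADATA_FORMAT)  # max retention, xff, count

@functools.lru_cache(maxsize=256)   # bounded, unlike an ever-growing dict
def read_header(path, mtime):
    """Read and unpack the metadata header of a whisper file.

    Keying the cache on (path, mtime) gives crude invalidation for free:
    if the file changes on disk, the stale entry is not looked up again.
    """
    with open(path, 'rb') as fh:
        packed = fh.read(METADATA_SIZE)
    return struct.unpack(METADATA_FORMAT, packed)
```

Eviction and invalidation come from the standard library instead of hand-rolled code, and the cache can be dropped entirely with one decorator removed.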


Documentation

None of the docstrings are written in a parsable syntax like Sphinx. This means you cannot generate documentation in a nice format that a developer using the library could read easily.

The documentation is also not up to date:

def fetch(path,fromTime,untilTime=None,now=None):
def create(path,archiveList,xFilesFactor=None,aggregationMethod=None,sparse=False,useFallocate=False):

This is something that could be avoided if a proper format were picked for the docstrings. A tool could then be used to notice when there's a divergence between the actual function signature and the documented one, such as a missing argument.
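For example, a Sphinx-style docstring for fetch() could look like the following; the parameter descriptions are my reading of the code, not whisper's official documentation:

```python
def fetch(path, fromTime, untilTime=None, now=None):
    """Fetch data points from a whisper file.

    :param path: path to the whisper file
    :param fromTime: epoch timestamp marking the start of the range
    :param untilTime: epoch timestamp marking the end of the range,
        defaulting to the current time
    :param now: current epoch timestamp, mainly useful for tests
    :return: a tuple of time information and the list of values
    """
```

With this in place, sphinx.ext.autodoc can render the API, and a checker can flag a documented signature drifting away from the real one.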

Duplicated code

Last but not least, there's a lot of code duplicated across the scripts provided by whisper in its bin directory. These scripts should be very lightweight and use the console_scripts facility of setuptools, but they actually contain a lot of (untested) code. Furthermore, some of that code is partially duplicated from the library, which is against DRY.
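With setuptools, each script shrinks to a one-line entry-point declaration in setup.py. A hypothetical sketch (the script and module names here are illustrative, not whisper's actual layout):

```python
# Passed to setuptools.setup(..., entry_points=ENTRY_POINTS) in setup.py:
# setuptools generates the wrapper executables, and all real logic lives
# in the library, where it can be unit-tested.
ENTRY_POINTS = {
    "console_scripts": [
        "whisper-create = whisper.cli:create",
        "whisper-fetch = whisper.cli:fetch",
    ],
}
```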


Conclusion

There are a few more things that made me stop considering whisper, but those are part of its feature set, not necessarily code quality. One can also point out that the code is very condensed and hard to read, and that's a more general problem with how it is organized and abstracted.

A lot of these defects are exactly the kind of points that made me start writing The Hacker's Guide to Python a year ago. Running into this kind of code makes me think it was a really good idea to write a book of advice on writing better Python code!

A book I wrote talking about designing Python applications, state of the art, advice to apply when building your application, various Python tips, etc. Interested? Check it out.

-1:-- Python bad practice, a concrete case (Post Julien Danjou)--September 15, 2014 11:09 AM